2025-09-02 00:00:08.598994 | Job console starting
2025-09-02 00:00:08.625834 | Updating git repos
2025-09-02 00:00:08.701853 | Cloning repos into workspace
2025-09-02 00:00:08.929374 | Restoring repo states
2025-09-02 00:00:08.955315 | Merging changes
2025-09-02 00:00:08.955339 | Checking out repos
2025-09-02 00:00:09.347770 | Preparing playbooks
2025-09-02 00:00:10.343497 | Running Ansible setup
2025-09-02 00:00:15.597894 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-02 00:00:17.196179 |
2025-09-02 00:00:17.196312 | PLAY [Base pre]
2025-09-02 00:00:17.218547 |
2025-09-02 00:00:17.218665 | TASK [Setup log path fact]
2025-09-02 00:00:17.237120 | orchestrator | ok
2025-09-02 00:00:17.259636 |
2025-09-02 00:00:17.259787 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-02 00:00:17.288421 | orchestrator | ok
2025-09-02 00:00:17.316036 |
2025-09-02 00:00:17.316148 | TASK [emit-job-header : Print job information]
2025-09-02 00:00:17.385920 | # Job Information
2025-09-02 00:00:17.386069 | Ansible Version: 2.16.14
2025-09-02 00:00:17.386103 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-09-02 00:00:17.386137 | Pipeline: periodic-midnight
2025-09-02 00:00:17.386160 | Executor: 521e9411259a
2025-09-02 00:00:17.386669 | Triggered by: https://github.com/osism/testbed
2025-09-02 00:00:17.386722 | Event ID: aad1d57f49de448fa6db7453d0658f5c
2025-09-02 00:00:17.403432 |
2025-09-02 00:00:17.403539 | LOOP [emit-job-header : Print node information]
2025-09-02 00:00:17.745670 | orchestrator | ok:
2025-09-02 00:00:17.748183 | orchestrator | # Node Information
2025-09-02 00:00:17.748318 | orchestrator | Inventory Hostname: orchestrator
2025-09-02 00:00:17.748351 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-02 00:00:17.748376 | orchestrator | Username: zuul-testbed05
2025-09-02 00:00:17.748398 | orchestrator | Distro: Debian 12.11
2025-09-02 00:00:17.748423 | orchestrator | Provider: static-testbed
2025-09-02 00:00:17.748446 | orchestrator | Region:
2025-09-02 00:00:17.748468 | orchestrator | Label: testbed-orchestrator
2025-09-02 00:00:17.748488 | orchestrator | Product Name: OpenStack Nova
2025-09-02 00:00:17.748508 | orchestrator | Interface IP: 81.163.193.140
2025-09-02 00:00:17.767523 |
2025-09-02 00:00:17.767631 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-02 00:00:19.127570 | orchestrator -> localhost | changed
2025-09-02 00:00:19.137240 |
2025-09-02 00:00:19.137654 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-02 00:00:21.863510 | orchestrator -> localhost | changed
2025-09-02 00:00:21.877553 |
2025-09-02 00:00:21.877649 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-02 00:00:22.520292 | orchestrator -> localhost | ok
2025-09-02 00:00:22.526194 |
2025-09-02 00:00:22.526331 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-02 00:00:22.555331 | orchestrator | ok
2025-09-02 00:00:22.586945 | orchestrator | included: /var/lib/zuul/builds/0258edba1581438491dbc4abeb4bfa2c/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-02 00:00:22.593594 |
2025-09-02 00:00:22.614026 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-02 00:00:24.069309 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-02 00:00:24.069484 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/0258edba1581438491dbc4abeb4bfa2c/work/0258edba1581438491dbc4abeb4bfa2c_id_rsa
2025-09-02 00:00:24.069516 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/0258edba1581438491dbc4abeb4bfa2c/work/0258edba1581438491dbc4abeb4bfa2c_id_rsa.pub
2025-09-02 00:00:24.069537 | orchestrator -> localhost | The key fingerprint is:
2025-09-02 00:00:24.069559 | orchestrator -> localhost | SHA256:kGw6YfaCa8XcOzwjDyGk1eMKgnSWYWOHMz/L0ewySWw zuul-build-sshkey
2025-09-02 00:00:24.069578 | orchestrator -> localhost | The key's randomart image is:
2025-09-02 00:00:24.069604 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-02 00:00:24.069623 | orchestrator -> localhost | | =.. |
2025-09-02 00:00:24.069641 | orchestrator -> localhost | | o==. . |
2025-09-02 00:00:24.069658 | orchestrator -> localhost | | .o+@ B |
2025-09-02 00:00:24.069674 | orchestrator -> localhost | |o+oB E + |
2025-09-02 00:00:24.069690 | orchestrator -> localhost | |= o & O S |
2025-09-02 00:00:24.069711 | orchestrator -> localhost | |.. = @ o |
2025-09-02 00:00:24.069728 | orchestrator -> localhost | | + o O |
2025-09-02 00:00:24.069744 | orchestrator -> localhost | | . + + |
2025-09-02 00:00:24.069761 | orchestrator -> localhost | | . |
2025-09-02 00:00:24.069778 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-02 00:00:24.069819 | orchestrator -> localhost | ok: Runtime: 0:00:00.295159
2025-09-02 00:00:24.075947 |
2025-09-02 00:00:24.076051 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-02 00:00:24.117806 | orchestrator | ok
2025-09-02 00:00:24.137301 | orchestrator | included: /var/lib/zuul/builds/0258edba1581438491dbc4abeb4bfa2c/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-02 00:00:24.152906 |
2025-09-02 00:00:24.152997 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-02 00:00:24.188127 | orchestrator | skipping: Conditional result was False
2025-09-02 00:00:24.203481 |
2025-09-02 00:00:24.203596 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-02 00:00:25.136437 | orchestrator | changed
2025-09-02 00:00:25.145381 |
2025-09-02 00:00:25.145473 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-02 00:00:25.455456 | orchestrator | ok
2025-09-02 00:00:25.460589 |
2025-09-02 00:00:25.460670 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-02 00:00:25.961799 | orchestrator | ok
2025-09-02 00:00:25.975461 |
2025-09-02 00:00:25.975561 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-02 00:00:26.472482 | orchestrator | ok
2025-09-02 00:00:26.477389 |
2025-09-02 00:00:26.477466 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-02 00:00:26.505016 | orchestrator | skipping: Conditional result was False
2025-09-02 00:00:26.511598 |
2025-09-02 00:00:26.511691 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-02 00:00:27.662474 | orchestrator -> localhost | changed
2025-09-02 00:00:27.673614 |
2025-09-02 00:00:27.673692 | TASK [add-build-sshkey : Add back temp key]
2025-09-02 00:00:28.537444 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/0258edba1581438491dbc4abeb4bfa2c/work/0258edba1581438491dbc4abeb4bfa2c_id_rsa (zuul-build-sshkey)
2025-09-02 00:00:28.537627 | orchestrator -> localhost | ok: Runtime: 0:00:00.033008
2025-09-02 00:00:28.543503 |
2025-09-02 00:00:28.543582 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-02 00:00:29.134036 | orchestrator | ok
2025-09-02 00:00:29.143012 |
2025-09-02 00:00:29.143104 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-02 00:00:29.175831 | orchestrator | skipping: Conditional result was False
2025-09-02 00:00:29.271932 |
2025-09-02 00:00:29.272054 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-02 00:00:29.824538 | orchestrator | ok
2025-09-02 00:00:29.844985 |
2025-09-02 00:00:29.845080 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-02 00:00:29.897328 | orchestrator | ok
2025-09-02 00:00:29.908517 |
2025-09-02 00:00:29.908615 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-02 00:00:30.307585 | orchestrator -> localhost | ok
2025-09-02 00:00:30.315577 |
2025-09-02 00:00:30.315689 | TASK [validate-host : Collect information about the host]
2025-09-02 00:00:31.747550 | orchestrator | ok
2025-09-02 00:00:31.791913 |
2025-09-02 00:00:31.792052 | TASK [validate-host : Sanitize hostname]
2025-09-02 00:00:31.884283 | orchestrator | ok
2025-09-02 00:00:31.889545 |
2025-09-02 00:00:31.889635 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-02 00:00:32.827429 | orchestrator -> localhost | changed
2025-09-02 00:00:32.835649 |
2025-09-02 00:00:32.835773 | TASK [validate-host : Collect information about zuul worker]
2025-09-02 00:00:33.443326 | orchestrator | ok
2025-09-02 00:00:33.448541 |
2025-09-02 00:00:33.448634 | TASK [validate-host : Write out all zuul information for each host]
2025-09-02 00:00:35.061463 | orchestrator -> localhost | changed
2025-09-02 00:00:35.070647 |
2025-09-02 00:00:35.070727 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-02 00:00:35.381724 | orchestrator | ok
2025-09-02 00:00:35.388177 |
2025-09-02 00:00:35.388281 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-02 00:01:05.946874 | orchestrator | changed:
2025-09-02 00:01:05.947108 | orchestrator | .d..t...... src/
2025-09-02 00:01:05.947158 | orchestrator | .d..t...... src/github.com/
2025-09-02 00:01:05.947208 | orchestrator | .d..t...... src/github.com/osism/
2025-09-02 00:01:05.947241 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-02 00:01:05.947272 | orchestrator | RedHat.yml
2025-09-02 00:01:05.963002 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-02 00:01:05.963020 | orchestrator | RedHat.yml
2025-09-02 00:01:05.963074 | orchestrator | = 1.53.0"...
2025-09-02 00:01:19.498286 | orchestrator | 00:01:19.498 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-09-02 00:01:19.562466 | orchestrator | 00:01:19.562 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-09-02 00:01:21.782930 | orchestrator | 00:01:21.782 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-02 00:01:22.615958 | orchestrator | 00:01:22.615 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-02 00:01:23.098775 | orchestrator | 00:01:23.098 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-02 00:01:23.792460 | orchestrator | 00:01:23.792 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-02 00:01:24.179207 | orchestrator | 00:01:24.178 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-02 00:01:24.779523 | orchestrator | 00:01:24.779 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-02 00:01:24.779638 | orchestrator | 00:01:24.779 STDOUT terraform: Providers are signed by their developers.
2025-09-02 00:01:24.779650 | orchestrator | 00:01:24.779 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-02 00:01:24.779659 | orchestrator | 00:01:24.779 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-02 00:01:24.779770 | orchestrator | 00:01:24.779 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-02 00:01:24.779828 | orchestrator | 00:01:24.779 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-02 00:01:24.779875 | orchestrator | 00:01:24.779 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-02 00:01:24.779887 | orchestrator | 00:01:24.779 STDOUT terraform: you run "tofu init" in the future.
2025-09-02 00:01:24.780556 | orchestrator | 00:01:24.780 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-02 00:01:24.780657 | orchestrator | 00:01:24.780 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-02 00:01:24.780699 | orchestrator | 00:01:24.780 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-02 00:01:24.780710 | orchestrator | 00:01:24.780 STDOUT terraform: should now work.
2025-09-02 00:01:24.780760 | orchestrator | 00:01:24.780 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-02 00:01:24.780827 | orchestrator | 00:01:24.780 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-02 00:01:24.780889 | orchestrator | 00:01:24.780 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-02 00:01:24.882935 | orchestrator | 00:01:24.882 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-09-02 00:01:24.883122 | orchestrator | 00:01:24.882 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-02 00:01:25.101325 | orchestrator | 00:01:25.100 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-02 00:01:25.101426 | orchestrator | 00:01:25.100 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-02 00:01:25.101440 | orchestrator | 00:01:25.101 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-02 00:01:25.101450 | orchestrator | 00:01:25.101 STDOUT terraform: for this configuration.
2025-09-02 00:01:25.220870 | orchestrator | 00:01:25.220 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
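[editor's note] The Terragrunt deprecation warnings above name their replacements directly. A minimal sketch of the non-deprecated invocation, assuming the tofu binary path and the "ci" workspace that appear in this log; the exact workspace subcommand the job uses (new vs. select) is not visible here and is an assumption:

    # Sketch only: the replacements named in the warnings above.
    export TG_TF_PATH=/home/zuul-testbed05/terraform   # replaces the deprecated TERRAGRUNT_TFPATH variable
    terragrunt run -- workspace new ci                  # replaces the deprecated `terragrunt workspace ...` form (subcommand assumed)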
2025-09-02 00:01:25.220955 | orchestrator | 00:01:25.220 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead. 2025-09-02 00:01:25.315955 | orchestrator | 00:01:25.315 STDOUT terraform: ci.auto.tfvars 2025-09-02 00:01:25.316051 | orchestrator | 00:01:25.315 STDOUT terraform: default_custom.tf 2025-09-02 00:01:25.444010 | orchestrator | 00:01:25.443 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead. 2025-09-02 00:01:26.394593 | orchestrator | 00:01:26.394 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-09-02 00:01:26.953111 | orchestrator | 00:01:26.952 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-09-02 00:01:27.294907 | orchestrator | 00:01:27.294 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-09-02 00:01:27.295665 | orchestrator | 00:01:27.294 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-09-02 00:01:27.295760 | orchestrator | 00:01:27.294 STDOUT terraform:  + create 2025-09-02 00:01:27.295924 | orchestrator | 00:01:27.294 STDOUT terraform:  <= read (data resources) 2025-09-02 00:01:27.295990 | orchestrator | 00:01:27.294 STDOUT terraform: OpenTofu will perform the following actions: 2025-09-02 00:01:27.296083 | orchestrator | 00:01:27.294 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-09-02 00:01:27.296098 | orchestrator | 00:01:27.294 STDOUT terraform:  # (config refers to values not yet known) 2025-09-02 00:01:27.296152 | orchestrator | 00:01:27.294 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-09-02 00:01:27.296164 | orchestrator | 00:01:27.294 STDOUT terraform:  + checksum = (known after apply) 2025-09-02 00:01:27.296174 | orchestrator | 00:01:27.294 STDOUT terraform:  + created_at = (known after apply) 2025-09-02 00:01:27.296208 | orchestrator | 00:01:27.294 STDOUT terraform:  + file = (known after apply) 2025-09-02 00:01:27.296578 | orchestrator | 00:01:27.294 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.296690 | orchestrator | 00:01:27.294 STDOUT terraform:  + metadata = (known after apply) 2025-09-02 00:01:27.296720 | orchestrator | 00:01:27.294 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-09-02 00:01:27.296728 | orchestrator | 00:01:27.294 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-02 00:01:27.296736 | orchestrator | 00:01:27.294 STDOUT terraform:  + most_recent = true 2025-09-02 00:01:27.296745 | orchestrator | 00:01:27.294 STDOUT terraform:  + name = (known after apply) 2025-09-02 00:01:27.296753 | orchestrator | 00:01:27.294 STDOUT terraform:  + protected = (known after apply) 2025-09-02 00:01:27.296760 | orchestrator | 00:01:27.294 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.296769 | orchestrator | 00:01:27.294 STDOUT terraform:  + schema = (known after apply) 2025-09-02 00:01:27.296777 | orchestrator | 00:01:27.294 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-02 00:01:27.296785 | orchestrator | 00:01:27.294 STDOUT terraform:  + tags = (known after apply) 2025-09-02 00:01:27.296793 | orchestrator | 00:01:27.294 STDOUT terraform:  + updated_at = (known after apply) 2025-09-02 00:01:27.296801 | orchestrator | 
00:01:27.294 STDOUT terraform:  } 2025-09-02 00:01:27.296813 | orchestrator | 00:01:27.294 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-09-02 00:01:27.296821 | orchestrator | 00:01:27.294 STDOUT terraform:  # (config refers to values not yet known) 2025-09-02 00:01:27.296829 | orchestrator | 00:01:27.294 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-09-02 00:01:27.296837 | orchestrator | 00:01:27.294 STDOUT terraform:  + checksum = (known after apply) 2025-09-02 00:01:27.296845 | orchestrator | 00:01:27.295 STDOUT terraform:  + created_at = (known after apply) 2025-09-02 00:01:27.296860 | orchestrator | 00:01:27.295 STDOUT terraform:  + file = (known after apply) 2025-09-02 00:01:27.296868 | orchestrator | 00:01:27.295 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.296876 | orchestrator | 00:01:27.295 STDOUT terraform:  + metadata = (known after apply) 2025-09-02 00:01:27.296884 | orchestrator | 00:01:27.295 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-09-02 00:01:27.296892 | orchestrator | 00:01:27.295 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-02 00:01:27.296900 | orchestrator | 00:01:27.295 STDOUT terraform:  + most_recent = true 2025-09-02 00:01:27.296908 | orchestrator | 00:01:27.295 STDOUT terraform:  + name = (known after apply) 2025-09-02 00:01:27.296915 | orchestrator | 00:01:27.295 STDOUT terraform:  + protected = (known after apply) 2025-09-02 00:01:27.296923 | orchestrator | 00:01:27.295 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.296931 | orchestrator | 00:01:27.295 STDOUT terraform:  + schema = (known after apply) 2025-09-02 00:01:27.296939 | orchestrator | 00:01:27.295 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-02 00:01:27.296947 | orchestrator | 00:01:27.295 STDOUT terraform:  + tags = (known after apply) 2025-09-02 00:01:27.296954 | orchestrator | 00:01:27.295 STDOUT terraform:  + updated_at = (known after apply) 2025-09-02 00:01:27.296962 | orchestrator | 00:01:27.295 STDOUT terraform:  } 2025-09-02 00:01:27.296970 | orchestrator | 00:01:27.295 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-09-02 00:01:27.296985 | orchestrator | 00:01:27.295 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-09-02 00:01:27.296993 | orchestrator | 00:01:27.295 STDOUT terraform:  + content = (known after apply) 2025-09-02 00:01:27.297001 | orchestrator | 00:01:27.295 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-02 00:01:27.297009 | orchestrator | 00:01:27.295 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-02 00:01:27.297016 | orchestrator | 00:01:27.295 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-02 00:01:27.297041 | orchestrator | 00:01:27.295 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-02 00:01:27.297060 | orchestrator | 00:01:27.295 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-02 00:01:27.297069 | orchestrator | 00:01:27.295 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-02 00:01:27.297078 | orchestrator | 00:01:27.295 STDOUT terraform:  + directory_permission = "0777" 2025-09-02 00:01:27.297086 | orchestrator | 00:01:27.295 STDOUT terraform:  + file_permission = "0644" 2025-09-02 00:01:27.297093 | orchestrator | 00:01:27.295 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-09-02 00:01:27.297101 | orchestrator | 00:01:27.295 STDOUT 
terraform:  + id = (known after apply) 2025-09-02 00:01:27.297109 | orchestrator | 00:01:27.295 STDOUT terraform:  } 2025-09-02 00:01:27.297117 | orchestrator | 00:01:27.295 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-09-02 00:01:27.297125 | orchestrator | 00:01:27.295 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-09-02 00:01:27.297133 | orchestrator | 00:01:27.295 STDOUT terraform:  + content = (known after apply) 2025-09-02 00:01:27.297140 | orchestrator | 00:01:27.295 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-02 00:01:27.297148 | orchestrator | 00:01:27.295 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-02 00:01:27.297156 | orchestrator | 00:01:27.295 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-02 00:01:27.297164 | orchestrator | 00:01:27.295 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-02 00:01:27.297172 | orchestrator | 00:01:27.295 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-02 00:01:27.297184 | orchestrator | 00:01:27.295 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-02 00:01:27.297192 | orchestrator | 00:01:27.295 STDOUT terraform:  + directory_permission = "0777" 2025-09-02 00:01:27.297200 | orchestrator | 00:01:27.296 STDOUT terraform:  + file_permission = "0644" 2025-09-02 00:01:27.297208 | orchestrator | 00:01:27.296 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-09-02 00:01:27.297216 | orchestrator | 00:01:27.296 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.297224 | orchestrator | 00:01:27.296 STDOUT terraform:  } 2025-09-02 00:01:27.297232 | orchestrator | 00:01:27.296 STDOUT terraform:  # local_file.inventory will be created 2025-09-02 00:01:27.297239 | orchestrator | 00:01:27.296 STDOUT terraform:  + resource "local_file" "inventory" { 2025-09-02 00:01:27.297247 | orchestrator | 00:01:27.296 STDOUT terraform:  + content = (known after apply) 2025-09-02 00:01:27.297261 | orchestrator | 00:01:27.296 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-02 00:01:27.297269 | orchestrator | 00:01:27.296 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-02 00:01:27.297277 | orchestrator | 00:01:27.296 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-02 00:01:27.297285 | orchestrator | 00:01:27.296 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-02 00:01:27.297293 | orchestrator | 00:01:27.296 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-02 00:01:27.297301 | orchestrator | 00:01:27.296 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-02 00:01:27.297309 | orchestrator | 00:01:27.296 STDOUT terraform:  + directory_permission = "0777" 2025-09-02 00:01:27.297316 | orchestrator | 00:01:27.296 STDOUT terraform:  + file_permission = "0644" 2025-09-02 00:01:27.297324 | orchestrator | 00:01:27.296 STDOUT terraform:  + filename = "inventory.ci" 2025-09-02 00:01:27.297332 | orchestrator | 00:01:27.296 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.297340 | orchestrator | 00:01:27.296 STDOUT terraform:  } 2025-09-02 00:01:27.297348 | orchestrator | 00:01:27.296 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-09-02 00:01:27.297356 | orchestrator | 00:01:27.296 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-09-02 00:01:27.297364 | orchestrator | 00:01:27.296 STDOUT terraform:  + content = (sensitive value) 2025-09-02 
00:01:27.297378 | orchestrator | 00:01:27.296 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-02 00:01:27.297386 | orchestrator | 00:01:27.296 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-02 00:01:27.297394 | orchestrator | 00:01:27.296 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-02 00:01:27.297402 | orchestrator | 00:01:27.296 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-02 00:01:27.297410 | orchestrator | 00:01:27.296 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-02 00:01:27.297418 | orchestrator | 00:01:27.296 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-02 00:01:27.297425 | orchestrator | 00:01:27.296 STDOUT terraform:  + directory_permission = "0700" 2025-09-02 00:01:27.297433 | orchestrator | 00:01:27.296 STDOUT terraform:  + file_permission = "0600" 2025-09-02 00:01:27.297441 | orchestrator | 00:01:27.296 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-09-02 00:01:27.297453 | orchestrator | 00:01:27.296 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.297461 | orchestrator | 00:01:27.296 STDOUT terraform:  } 2025-09-02 00:01:27.297469 | orchestrator | 00:01:27.296 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-09-02 00:01:27.297477 | orchestrator | 00:01:27.296 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-09-02 00:01:27.297485 | orchestrator | 00:01:27.296 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.297494 | orchestrator | 00:01:27.296 STDOUT terraform:  } 2025-09-02 00:01:27.297502 | orchestrator | 00:01:27.296 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-09-02 00:01:27.297516 | orchestrator | 00:01:27.296 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-09-02 00:01:27.297525 | orchestrator | 00:01:27.297 STDOUT terraform:  + attachment = (known after apply) 2025-09-02 00:01:27.297533 | orchestrator | 00:01:27.297 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.297541 | orchestrator | 00:01:27.297 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.297549 | orchestrator | 00:01:27.297 STDOUT terraform:  + image_id = (known after apply) 2025-09-02 00:01:27.297557 | orchestrator | 00:01:27.297 STDOUT terraform:  + metadata = (known after apply) 2025-09-02 00:01:27.297565 | orchestrator | 00:01:27.297 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-09-02 00:01:27.297573 | orchestrator | 00:01:27.297 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.297581 | orchestrator | 00:01:27.297 STDOUT terraform:  + size = 80 2025-09-02 00:01:27.297589 | orchestrator | 00:01:27.297 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-02 00:01:27.297597 | orchestrator | 00:01:27.297 STDOUT terraform:  + volume_type = "ssd" 2025-09-02 00:01:27.297605 | orchestrator | 00:01:27.297 STDOUT terraform:  } 2025-09-02 00:01:27.297617 | orchestrator | 00:01:27.297 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-09-02 00:01:27.297625 | orchestrator | 00:01:27.297 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-02 00:01:27.297633 | orchestrator | 00:01:27.297 STDOUT terraform:  + attachment = (known after apply) 2025-09-02 00:01:27.297641 | orchestrator | 00:01:27.297 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 
00:01:27.297649 | orchestrator | 00:01:27.297 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.297657 | orchestrator | 00:01:27.297 STDOUT terraform:  + image_id = (known after apply) 2025-09-02 00:01:27.297665 | orchestrator | 00:01:27.297 STDOUT terraform:  + metadata = (known after apply) 2025-09-02 00:01:27.297673 | orchestrator | 00:01:27.297 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-09-02 00:01:27.297684 | orchestrator | 00:01:27.297 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.297692 | orchestrator | 00:01:27.297 STDOUT terraform:  + size = 80 2025-09-02 00:01:27.297700 | orchestrator | 00:01:27.297 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-02 00:01:27.297711 | orchestrator | 00:01:27.297 STDOUT terraform:  + volume_type = "ssd" 2025-09-02 00:01:27.297720 | orchestrator | 00:01:27.297 STDOUT terraform:  } 2025-09-02 00:01:27.297772 | orchestrator | 00:01:27.297 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-09-02 00:01:27.297804 | orchestrator | 00:01:27.297 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-02 00:01:27.297835 | orchestrator | 00:01:27.297 STDOUT terraform:  + attachment = (known after apply) 2025-09-02 00:01:27.297884 | orchestrator | 00:01:27.297 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.298152 | orchestrator | 00:01:27.297 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.298222 | orchestrator | 00:01:27.297 STDOUT terraform:  + image_id = (known after apply) 2025-09-02 00:01:27.298257 | orchestrator | 00:01:27.297 STDOUT terraform:  + metadata = (known after apply) 2025-09-02 00:01:27.304367 | orchestrator | 00:01:27.297 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-09-02 00:01:27.304514 | orchestrator | 00:01:27.304 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.304562 | orchestrator | 00:01:27.304 STDOUT terraform:  + size = 80 2025-09-02 00:01:27.304620 | orchestrator | 00:01:27.304 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-02 00:01:27.304667 | orchestrator | 00:01:27.304 STDOUT terraform:  + volume_type = "ssd" 2025-09-02 00:01:27.304698 | orchestrator | 00:01:27.304 STDOUT terraform:  } 2025-09-02 00:01:27.304779 | orchestrator | 00:01:27.304 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-09-02 00:01:27.304854 | orchestrator | 00:01:27.304 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-02 00:01:27.304919 | orchestrator | 00:01:27.304 STDOUT terraform:  + attachment = (known after apply) 2025-09-02 00:01:27.304963 | orchestrator | 00:01:27.304 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.305042 | orchestrator | 00:01:27.304 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.305103 | orchestrator | 00:01:27.305 STDOUT terraform:  + image_id = (known after apply) 2025-09-02 00:01:27.305173 | orchestrator | 00:01:27.305 STDOUT terraform:  + metadata = (known after apply) 2025-09-02 00:01:27.305251 | orchestrator | 00:01:27.305 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-09-02 00:01:27.305322 | orchestrator | 00:01:27.305 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.305362 | orchestrator | 00:01:27.305 STDOUT terraform:  + size = 80 2025-09-02 00:01:27.305405 | orchestrator | 00:01:27.305 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-09-02 00:01:27.305453 | orchestrator | 00:01:27.305 STDOUT terraform:  + volume_type = "ssd" 2025-09-02 00:01:27.305483 | orchestrator | 00:01:27.305 STDOUT terraform:  } 2025-09-02 00:01:27.305561 | orchestrator | 00:01:27.305 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-09-02 00:01:27.305635 | orchestrator | 00:01:27.305 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-02 00:01:27.305695 | orchestrator | 00:01:27.305 STDOUT terraform:  + attachment = (known after apply) 2025-09-02 00:01:27.305739 | orchestrator | 00:01:27.305 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.305802 | orchestrator | 00:01:27.305 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.305865 | orchestrator | 00:01:27.305 STDOUT terraform:  + image_id = (known after apply) 2025-09-02 00:01:27.305930 | orchestrator | 00:01:27.305 STDOUT terraform:  + metadata = (known after apply) 2025-09-02 00:01:27.306071 | orchestrator | 00:01:27.305 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-09-02 00:01:27.306141 | orchestrator | 00:01:27.306 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.306182 | orchestrator | 00:01:27.306 STDOUT terraform:  + size = 80 2025-09-02 00:01:27.306226 | orchestrator | 00:01:27.306 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-02 00:01:27.306272 | orchestrator | 00:01:27.306 STDOUT terraform:  + volume_type = "ssd" 2025-09-02 00:01:27.306302 | orchestrator | 00:01:27.306 STDOUT terraform:  } 2025-09-02 00:01:27.306377 | orchestrator | 00:01:27.306 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-09-02 00:01:27.306452 | orchestrator | 00:01:27.306 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-02 00:01:27.306515 | orchestrator | 00:01:27.306 STDOUT terraform:  + attachment = (known after apply) 2025-09-02 00:01:27.306557 | orchestrator | 00:01:27.306 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.306611 | orchestrator | 00:01:27.306 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.306665 | orchestrator | 00:01:27.306 STDOUT terraform:  + image_id = (known after apply) 2025-09-02 00:01:27.306719 | orchestrator | 00:01:27.306 STDOUT terraform:  + metadata = (known after apply) 2025-09-02 00:01:27.306786 | orchestrator | 00:01:27.306 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-09-02 00:01:27.306841 | orchestrator | 00:01:27.306 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.306876 | orchestrator | 00:01:27.306 STDOUT terraform:  + size = 80 2025-09-02 00:01:27.306917 | orchestrator | 00:01:27.306 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-02 00:01:27.306957 | orchestrator | 00:01:27.306 STDOUT terraform:  + volume_type = "ssd" 2025-09-02 00:01:27.306983 | orchestrator | 00:01:27.306 STDOUT terraform:  } 2025-09-02 00:01:27.307065 | orchestrator | 00:01:27.306 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-09-02 00:01:27.307134 | orchestrator | 00:01:27.307 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-02 00:01:27.307188 | orchestrator | 00:01:27.307 STDOUT terraform:  + attachment = (known after apply) 2025-09-02 00:01:27.307226 | orchestrator | 00:01:27.307 STDOUT terraform:  + availability_zone = "nova" 
2025-09-02 00:01:27.307285 | orchestrator | 00:01:27.307 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.307339 | orchestrator | 00:01:27.307 STDOUT terraform:  + image_id = (known after apply) 2025-09-02 00:01:27.307392 | orchestrator | 00:01:27.307 STDOUT terraform:  + metadata = (known after apply) 2025-09-02 00:01:27.307458 | orchestrator | 00:01:27.307 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-09-02 00:01:27.307513 | orchestrator | 00:01:27.307 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.307548 | orchestrator | 00:01:27.307 STDOUT terraform:  + size = 80 2025-09-02 00:01:27.307598 | orchestrator | 00:01:27.307 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-02 00:01:27.307639 | orchestrator | 00:01:27.307 STDOUT terraform:  + volume_type = "ssd" 2025-09-02 00:01:27.307665 | orchestrator | 00:01:27.307 STDOUT terraform:  } 2025-09-02 00:01:27.307732 | orchestrator | 00:01:27.307 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-09-02 00:01:27.307795 | orchestrator | 00:01:27.307 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-02 00:01:27.307847 | orchestrator | 00:01:27.307 STDOUT terraform:  + attachment = (known after apply) 2025-09-02 00:01:27.307886 | orchestrator | 00:01:27.307 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.307942 | orchestrator | 00:01:27.307 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.307995 | orchestrator | 00:01:27.307 STDOUT terraform:  + metadata = (known after apply) 2025-09-02 00:01:27.308065 | orchestrator | 00:01:27.308 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-02 00:01:27.308140 | orchestrator | 00:01:27.308 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.308272 | orchestrator | 00:01:27.308 STDOUT terraform:  + size = 20 2025-09-02 00:01:27.308534 | orchestrator | 00:01:27.308 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-02 00:01:27.308759 | orchestrator | 00:01:27.308 STDOUT terraform:  + volume_type = "ssd" 2025-09-02 00:01:27.308877 | orchestrator | 00:01:27.308 STDOUT terraform:  } 2025-09-02 00:01:27.309060 | orchestrator | 00:01:27.308 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-02 00:01:27.309286 | orchestrator | 00:01:27.309 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-02 00:01:27.309579 | orchestrator | 00:01:27.309 STDOUT terraform:  + attachment = (known after apply) 2025-09-02 00:01:27.309676 | orchestrator | 00:01:27.309 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.309889 | orchestrator | 00:01:27.309 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.310212 | orchestrator | 00:01:27.309 STDOUT terraform:  + metadata = (known after apply) 2025-09-02 00:01:27.310458 | orchestrator | 00:01:27.310 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-02 00:01:27.310693 | orchestrator | 00:01:27.310 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.310824 | orchestrator | 00:01:27.310 STDOUT terraform:  + size = 20 2025-09-02 00:01:27.310995 | orchestrator | 00:01:27.310 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-02 00:01:27.311156 | orchestrator | 00:01:27.311 STDOUT terraform:  + volume_type = "ssd" 2025-09-02 00:01:27.311246 | orchestrator | 00:01:27.311 STDOUT terraform:  } 2025-09-02 00:01:27.311469 | orchestrator 
| 00:01:27.311 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-02 00:01:27.311683 | orchestrator | 00:01:27.311 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-02 00:01:27.311869 | orchestrator | 00:01:27.311 STDOUT terraform:  + attachment = (known after apply) 2025-09-02 00:01:27.312047 | orchestrator | 00:01:27.311 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.312256 | orchestrator | 00:01:27.312 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.312446 | orchestrator | 00:01:27.312 STDOUT terraform:  + metadata = (known after apply) 2025-09-02 00:01:27.312633 | orchestrator | 00:01:27.312 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-02 00:01:27.312820 | orchestrator | 00:01:27.312 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.312949 | orchestrator | 00:01:27.312 STDOUT terraform:  + size = 20 2025-09-02 00:01:27.313159 | orchestrator | 00:01:27.312 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-02 00:01:27.316220 | orchestrator | 00:01:27.313 STDOUT terraform:  + volume_type = "ssd" 2025-09-02 00:01:27.316257 | orchestrator | 00:01:27.316 STDOUT terraform:  } 2025-09-02 00:01:27.316315 | orchestrator | 00:01:27.316 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-02 00:01:27.316373 | orchestrator | 00:01:27.316 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-02 00:01:27.316424 | orchestrator | 00:01:27.316 STDOUT terraform:  + attachment = (known after apply) 2025-09-02 00:01:27.316458 | orchestrator | 00:01:27.316 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.316504 | orchestrator | 00:01:27.316 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.316549 | orchestrator | 00:01:27.316 STDOUT terraform:  + metadata = (known after apply) 2025-09-02 00:01:27.316624 | orchestrator | 00:01:27.316 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-02 00:01:27.316672 | orchestrator | 00:01:27.316 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.316701 | orchestrator | 00:01:27.316 STDOUT terraform:  + size = 20 2025-09-02 00:01:27.316735 | orchestrator | 00:01:27.316 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-02 00:01:27.316770 | orchestrator | 00:01:27.316 STDOUT terraform:  + volume_type = "ssd" 2025-09-02 00:01:27.316793 | orchestrator | 00:01:27.316 STDOUT terraform:  } 2025-09-02 00:01:27.316847 | orchestrator | 00:01:27.316 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-02 00:01:27.316900 | orchestrator | 00:01:27.316 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-02 00:01:27.316946 | orchestrator | 00:01:27.316 STDOUT terraform:  + attachment = (known after apply) 2025-09-02 00:01:27.316984 | orchestrator | 00:01:27.316 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.317044 | orchestrator | 00:01:27.316 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.317088 | orchestrator | 00:01:27.317 STDOUT terraform:  + metadata = (known after apply) 2025-09-02 00:01:27.317142 | orchestrator | 00:01:27.317 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-02 00:01:27.317186 | orchestrator | 00:01:27.317 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.317225 | orchestrator | 00:01:27.317 STDOUT 
terraform:  + size = 20 2025-09-02 00:01:27.317258 | orchestrator | 00:01:27.317 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-02 00:01:27.317292 | orchestrator | 00:01:27.317 STDOUT terraform:  + volume_type = "ssd" 2025-09-02 00:01:27.317314 | orchestrator | 00:01:27.317 STDOUT terraform:  } 2025-09-02 00:01:27.317370 | orchestrator | 00:01:27.317 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-02 00:01:27.317420 | orchestrator | 00:01:27.317 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-02 00:01:27.317465 | orchestrator | 00:01:27.317 STDOUT terraform:  + attachment = (known after apply) 2025-09-02 00:01:27.317496 | orchestrator | 00:01:27.317 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.317543 | orchestrator | 00:01:27.317 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.317587 | orchestrator | 00:01:27.317 STDOUT terraform:  + metadata = (known after apply) 2025-09-02 00:01:27.317636 | orchestrator | 00:01:27.317 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-02 00:01:27.317683 | orchestrator | 00:01:27.317 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.317711 | orchestrator | 00:01:27.317 STDOUT terraform:  + size = 20 2025-09-02 00:01:27.317743 | orchestrator | 00:01:27.317 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-02 00:01:27.317777 | orchestrator | 00:01:27.317 STDOUT terraform:  + volume_type = "ssd" 2025-09-02 00:01:27.317799 | orchestrator | 00:01:27.317 STDOUT terraform:  } 2025-09-02 00:01:27.317858 | orchestrator | 00:01:27.317 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-02 00:01:27.318095 | orchestrator | 00:01:27.317 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-02 00:01:27.322147 | orchestrator | 00:01:27.322 STDOUT terraform:  + attachment = (known after apply) 2025-09-02 00:01:27.322183 | orchestrator | 00:01:27.322 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.322229 | orchestrator | 00:01:27.322 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.322275 | orchestrator | 00:01:27.322 STDOUT terraform:  + metadata = (known after apply) 2025-09-02 00:01:27.322322 | orchestrator | 00:01:27.322 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-02 00:01:27.322365 | orchestrator | 00:01:27.322 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.322393 | orchestrator | 00:01:27.322 STDOUT terraform:  + size = 20 2025-09-02 00:01:27.322425 | orchestrator | 00:01:27.322 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-02 00:01:27.322457 | orchestrator | 00:01:27.322 STDOUT terraform:  + volume_type = "ssd" 2025-09-02 00:01:27.322477 | orchestrator | 00:01:27.322 STDOUT terraform:  } 2025-09-02 00:01:27.322531 | orchestrator | 00:01:27.322 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-02 00:01:27.322580 | orchestrator | 00:01:27.322 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-02 00:01:27.322630 | orchestrator | 00:01:27.322 STDOUT terraform:  + attachment = (known after apply) 2025-09-02 00:01:27.322660 | orchestrator | 00:01:27.322 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.322705 | orchestrator | 00:01:27.322 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.322747 | orchestrator | 
00:01:27.322 STDOUT terraform:  + metadata = (known after apply) 2025-09-02 00:01:27.322798 | orchestrator | 00:01:27.322 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-02 00:01:27.322841 | orchestrator | 00:01:27.322 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.322875 | orchestrator | 00:01:27.322 STDOUT terraform:  + size = 20 2025-09-02 00:01:27.322906 | orchestrator | 00:01:27.322 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-02 00:01:27.322937 | orchestrator | 00:01:27.322 STDOUT terraform:  + volume_type = "ssd" 2025-09-02 00:01:27.322957 | orchestrator | 00:01:27.322 STDOUT terraform:  } 2025-09-02 00:01:27.323008 | orchestrator | 00:01:27.322 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-02 00:01:27.323071 | orchestrator | 00:01:27.323 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-02 00:01:27.323113 | orchestrator | 00:01:27.323 STDOUT terraform:  + attachment = (known after apply) 2025-09-02 00:01:27.323144 | orchestrator | 00:01:27.323 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.323186 | orchestrator | 00:01:27.323 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.323228 | orchestrator | 00:01:27.323 STDOUT terraform:  + metadata = (known after apply) 2025-09-02 00:01:27.323281 | orchestrator | 00:01:27.323 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-02 00:01:27.323326 | orchestrator | 00:01:27.323 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.323353 | orchestrator | 00:01:27.323 STDOUT terraform:  + size = 20 2025-09-02 00:01:27.323383 | orchestrator | 00:01:27.323 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-02 00:01:27.323414 | orchestrator | 00:01:27.323 STDOUT terraform:  + volume_type = "ssd" 2025-09-02 00:01:27.323437 | orchestrator | 00:01:27.323 STDOUT terraform:  } 2025-09-02 00:01:27.323486 | orchestrator | 00:01:27.323 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-02 00:01:27.323536 | orchestrator | 00:01:27.323 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-02 00:01:27.323577 | orchestrator | 00:01:27.323 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-02 00:01:27.323618 | orchestrator | 00:01:27.323 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-02 00:01:27.323658 | orchestrator | 00:01:27.323 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-02 00:01:27.323700 | orchestrator | 00:01:27.323 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.323731 | orchestrator | 00:01:27.323 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.323763 | orchestrator | 00:01:27.323 STDOUT terraform:  + config_drive = true 2025-09-02 00:01:27.323806 | orchestrator | 00:01:27.323 STDOUT terraform:  + created = (known after apply) 2025-09-02 00:01:27.323846 | orchestrator | 00:01:27.323 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-02 00:01:27.323882 | orchestrator | 00:01:27.323 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-02 00:01:27.323914 | orchestrator | 00:01:27.323 STDOUT terraform:  + force_delete = false 2025-09-02 00:01:27.323956 | orchestrator | 00:01:27.323 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-02 00:01:27.323997 | orchestrator | 00:01:27.323 STDOUT terraform:  + id = (known after apply) 2025-09-02 
00:01:27.324068 | orchestrator | 00:01:27.324 STDOUT terraform:  + image_id = (known after apply) 2025-09-02 00:01:27.324111 | orchestrator | 00:01:27.324 STDOUT terraform:  + image_name = (known after apply) 2025-09-02 00:01:27.324143 | orchestrator | 00:01:27.324 STDOUT terraform:  + key_pair = "testbed" 2025-09-02 00:01:27.324179 | orchestrator | 00:01:27.324 STDOUT terraform:  + name = "testbed-manager" 2025-09-02 00:01:27.324210 | orchestrator | 00:01:27.324 STDOUT terraform:  + power_state = "active" 2025-09-02 00:01:27.324250 | orchestrator | 00:01:27.324 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.324291 | orchestrator | 00:01:27.324 STDOUT terraform:  + security_groups = (known after apply) 2025-09-02 00:01:27.324322 | orchestrator | 00:01:27.324 STDOUT terraform:  + stop_before_destroy = false 2025-09-02 00:01:27.324363 | orchestrator | 00:01:27.324 STDOUT terraform:  + updated = (known after apply) 2025-09-02 00:01:27.324400 | orchestrator | 00:01:27.324 STDOUT terraform:  + user_data = (sensitive value) 2025-09-02 00:01:27.324423 | orchestrator | 00:01:27.324 STDOUT terraform:  + block_device { 2025-09-02 00:01:27.324455 | orchestrator | 00:01:27.324 STDOUT terraform:  + boot_index = 0 2025-09-02 00:01:27.324492 | orchestrator | 00:01:27.324 STDOUT terraform:  + delete_on_termination = false 2025-09-02 00:01:27.324531 | orchestrator | 00:01:27.324 STDOUT terraform:  + destination_type = "volume" 2025-09-02 00:01:27.324566 | orchestrator | 00:01:27.324 STDOUT terraform:  + multiattach = false 2025-09-02 00:01:27.324602 | orchestrator | 00:01:27.324 STDOUT terraform:  + source_type = "volume" 2025-09-02 00:01:27.324647 | orchestrator | 00:01:27.324 STDOUT terraform:  + uuid = (known after apply) 2025-09-02 00:01:27.324668 | orchestrator | 00:01:27.324 STDOUT terraform:  } 2025-09-02 00:01:27.324690 | orchestrator | 00:01:27.324 STDOUT terraform:  + network { 2025-09-02 00:01:27.324716 | orchestrator | 00:01:27.324 STDOUT terraform:  + access_network = false 2025-09-02 00:01:27.324753 | orchestrator | 00:01:27.324 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-02 00:01:27.324791 | orchestrator | 00:01:27.324 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-02 00:01:27.324831 | orchestrator | 00:01:27.324 STDOUT terraform:  + mac = (known after apply) 2025-09-02 00:01:27.324876 | orchestrator | 00:01:27.324 STDOUT terraform:  + name = (known after apply) 2025-09-02 00:01:27.324930 | orchestrator | 00:01:27.324 STDOUT terraform:  + port = (known after apply) 2025-09-02 00:01:27.325029 | orchestrator | 00:01:27.324 STDOUT terraform:  + uuid = (known after apply) 2025-09-02 00:01:27.325077 | orchestrator | 00:01:27.325 STDOUT terraform:  } 2025-09-02 00:01:27.325164 | orchestrator | 00:01:27.325 STDOUT terraform:  } 2025-09-02 00:01:27.325273 | orchestrator | 00:01:27.325 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-02 00:01:27.325559 | orchestrator | 00:01:27.325 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-02 00:01:27.325757 | orchestrator | 00:01:27.325 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-02 00:01:27.325985 | orchestrator | 00:01:27.325 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-02 00:01:27.326133 | orchestrator | 00:01:27.326 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-02 00:01:27.327332 | orchestrator | 00:01:27.327 STDOUT terraform:  + all_tags = (known after apply) 
2025-09-02 00:01:27.328178 | orchestrator | 00:01:27.327 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.334072 | orchestrator | 00:01:27.328 STDOUT terraform:  + config_drive = true 2025-09-02 00:01:27.334095 | orchestrator | 00:01:27.328 STDOUT terraform:  + created = (known after apply) 2025-09-02 00:01:27.334099 | orchestrator | 00:01:27.328 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-02 00:01:27.334103 | orchestrator | 00:01:27.328 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-02 00:01:27.334107 | orchestrator | 00:01:27.328 STDOUT terraform:  + force_delete = false 2025-09-02 00:01:27.334111 | orchestrator | 00:01:27.328 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-02 00:01:27.334115 | orchestrator | 00:01:27.328 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.334118 | orchestrator | 00:01:27.328 STDOUT terraform:  + image_id = (known after apply) 2025-09-02 00:01:27.334122 | orchestrator | 00:01:27.328 STDOUT terraform:  + image_name = (known after apply) 2025-09-02 00:01:27.334126 | orchestrator | 00:01:27.328 STDOUT terraform:  + key_pair = "testbed" 2025-09-02 00:01:27.334130 | orchestrator | 00:01:27.328 STDOUT terraform:  + name = "testbed-node-0" 2025-09-02 00:01:27.334133 | orchestrator | 00:01:27.328 STDOUT terraform:  + power_state = "active" 2025-09-02 00:01:27.334137 | orchestrator | 00:01:27.328 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.334141 | orchestrator | 00:01:27.328 STDOUT terraform:  + security_groups = (known after apply) 2025-09-02 00:01:27.334144 | orchestrator | 00:01:27.328 STDOUT terraform:  + stop_before_destroy = false 2025-09-02 00:01:27.334148 | orchestrator | 00:01:27.328 STDOUT terraform:  + updated = (known after apply) 2025-09-02 00:01:27.334152 | orchestrator | 00:01:27.328 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-02 00:01:27.334156 | orchestrator | 00:01:27.328 STDOUT terraform:  + block_device { 2025-09-02 00:01:27.334167 | orchestrator | 00:01:27.328 STDOUT terraform:  + boot_index = 0 2025-09-02 00:01:27.334171 | orchestrator | 00:01:27.328 STDOUT terraform:  + delete_on_termination = false 2025-09-02 00:01:27.334174 | orchestrator | 00:01:27.328 STDOUT terraform:  + destination_type = "volume" 2025-09-02 00:01:27.334178 | orchestrator | 00:01:27.328 STDOUT terraform:  + multiattach = false 2025-09-02 00:01:27.334182 | orchestrator | 00:01:27.328 STDOUT terraform:  + source_type = "volume" 2025-09-02 00:01:27.334186 | orchestrator | 00:01:27.328 STDOUT terraform:  + uuid = (known after apply) 2025-09-02 00:01:27.334190 | orchestrator | 00:01:27.328 STDOUT terraform:  } 2025-09-02 00:01:27.334194 | orchestrator | 00:01:27.328 STDOUT terraform:  + network { 2025-09-02 00:01:27.334197 | orchestrator | 00:01:27.328 STDOUT terraform:  + access_network = false 2025-09-02 00:01:27.334206 | orchestrator | 00:01:27.328 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-02 00:01:27.334210 | orchestrator | 00:01:27.328 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-02 00:01:27.334214 | orchestrator | 00:01:27.328 STDOUT terraform:  + mac = (known after apply) 2025-09-02 00:01:27.334217 | orchestrator | 00:01:27.328 STDOUT terraform:  + name = (known after apply) 2025-09-02 00:01:27.334221 | orchestrator | 00:01:27.328 STDOUT terraform:  + port = (known after apply) 2025-09-02 00:01:27.334225 | orchestrator | 00:01:27.328 STDOUT terraform:  + uuid = (known after apply) 
2025-09-02 00:01:27.334229 | orchestrator | 00:01:27.329 STDOUT terraform:  } 2025-09-02 00:01:27.334232 | orchestrator | 00:01:27.329 STDOUT terraform:  } 2025-09-02 00:01:27.334236 | orchestrator | 00:01:27.329 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-02 00:01:27.334240 | orchestrator | 00:01:27.329 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-02 00:01:27.334244 | orchestrator | 00:01:27.329 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-02 00:01:27.334254 | orchestrator | 00:01:27.329 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-02 00:01:27.334258 | orchestrator | 00:01:27.329 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-02 00:01:27.334261 | orchestrator | 00:01:27.329 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.334265 | orchestrator | 00:01:27.329 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.334269 | orchestrator | 00:01:27.329 STDOUT terraform:  + config_drive = true 2025-09-02 00:01:27.334273 | orchestrator | 00:01:27.329 STDOUT terraform:  + created = (known after apply) 2025-09-02 00:01:27.334276 | orchestrator | 00:01:27.329 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-02 00:01:27.334280 | orchestrator | 00:01:27.329 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-02 00:01:27.334284 | orchestrator | 00:01:27.329 STDOUT terraform:  + force_delete = false 2025-09-02 00:01:27.334288 | orchestrator | 00:01:27.329 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-02 00:01:27.334295 | orchestrator | 00:01:27.329 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.334299 | orchestrator | 00:01:27.329 STDOUT terraform:  + image_id = (known after apply) 2025-09-02 00:01:27.334302 | orchestrator | 00:01:27.329 STDOUT terraform:  + image_name = (known after apply) 2025-09-02 00:01:27.334306 | orchestrator | 00:01:27.329 STDOUT terraform:  + key_pair = "testbed" 2025-09-02 00:01:27.334310 | orchestrator | 00:01:27.329 STDOUT terraform:  + name = "testbed-node-1" 2025-09-02 00:01:27.334314 | orchestrator | 00:01:27.329 STDOUT terraform:  + power_state = "active" 2025-09-02 00:01:27.334317 | orchestrator | 00:01:27.329 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.334321 | orchestrator | 00:01:27.329 STDOUT terraform:  + security_groups = (known after apply) 2025-09-02 00:01:27.334325 | orchestrator | 00:01:27.329 STDOUT terraform:  + stop_before_destroy = false 2025-09-02 00:01:27.334328 | orchestrator | 00:01:27.329 STDOUT terraform:  + updated = (known after apply) 2025-09-02 00:01:27.334332 | orchestrator | 00:01:27.329 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-02 00:01:27.334336 | orchestrator | 00:01:27.329 STDOUT terraform:  + block_device { 2025-09-02 00:01:27.334340 | orchestrator | 00:01:27.329 STDOUT terraform:  + boot_index = 0 2025-09-02 00:01:27.334343 | orchestrator | 00:01:27.329 STDOUT terraform:  + delete_on_termination = false 2025-09-02 00:01:27.334347 | orchestrator | 00:01:27.329 STDOUT terraform:  + destination_type = "volume" 2025-09-02 00:01:27.334351 | orchestrator | 00:01:27.329 STDOUT terraform:  + multiattach = false 2025-09-02 00:01:27.334355 | orchestrator | 00:01:27.329 STDOUT terraform:  + source_type = "volume" 2025-09-02 00:01:27.334358 | orchestrator | 00:01:27.329 STDOUT terraform:  + uuid = (known after apply) 2025-09-02 00:01:27.334362 | 
orchestrator | 00:01:27.329 STDOUT terraform:  } 2025-09-02 00:01:27.334366 | orchestrator | 00:01:27.329 STDOUT terraform:  + network { 2025-09-02 00:01:27.334369 | orchestrator | 00:01:27.329 STDOUT terraform:  + access_network = false 2025-09-02 00:01:27.334373 | orchestrator | 00:01:27.329 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-02 00:01:27.334377 | orchestrator | 00:01:27.329 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-02 00:01:27.334381 | orchestrator | 00:01:27.329 STDOUT terraform:  + mac = (known after apply) 2025-09-02 00:01:27.334384 | orchestrator | 00:01:27.330 STDOUT terraform:  + name = (known after apply) 2025-09-02 00:01:27.334388 | orchestrator | 00:01:27.330 STDOUT terraform:  + port = (known after apply) 2025-09-02 00:01:27.334392 | orchestrator | 00:01:27.330 STDOUT terraform:  + uuid = (known after apply) 2025-09-02 00:01:27.334396 | orchestrator | 00:01:27.330 STDOUT terraform:  } 2025-09-02 00:01:27.334402 | orchestrator | 00:01:27.330 STDOUT terraform:  } 2025-09-02 00:01:27.334406 | orchestrator | 00:01:27.330 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-02 00:01:27.334410 | orchestrator | 00:01:27.330 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-02 00:01:27.334416 | orchestrator | 00:01:27.330 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-02 00:01:27.334420 | orchestrator | 00:01:27.330 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-02 00:01:27.334424 | orchestrator | 00:01:27.330 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-02 00:01:27.334430 | orchestrator | 00:01:27.330 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.334434 | orchestrator | 00:01:27.330 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.334438 | orchestrator | 00:01:27.330 STDOUT terraform:  + config_drive = true 2025-09-02 00:01:27.334442 | orchestrator | 00:01:27.330 STDOUT terraform:  + created = (known after apply) 2025-09-02 00:01:27.334446 | orchestrator | 00:01:27.330 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-02 00:01:27.334449 | orchestrator | 00:01:27.330 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-02 00:01:27.334453 | orchestrator | 00:01:27.330 STDOUT terraform:  + force_delete = false 2025-09-02 00:01:27.334457 | orchestrator | 00:01:27.330 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-02 00:01:27.334461 | orchestrator | 00:01:27.330 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.334464 | orchestrator | 00:01:27.330 STDOUT terraform:  + image_id = (known after apply) 2025-09-02 00:01:27.334468 | orchestrator | 00:01:27.330 STDOUT terraform:  + image_name = (known after apply) 2025-09-02 00:01:27.334472 | orchestrator | 00:01:27.330 STDOUT terraform:  + key_pair = "testbed" 2025-09-02 00:01:27.334476 | orchestrator | 00:01:27.330 STDOUT terraform:  + name = "testbed-node-2" 2025-09-02 00:01:27.334479 | orchestrator | 00:01:27.330 STDOUT terraform:  + power_state = "active" 2025-09-02 00:01:27.334483 | orchestrator | 00:01:27.330 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.334487 | orchestrator | 00:01:27.330 STDOUT terraform:  + security_groups = (known after apply) 2025-09-02 00:01:27.334491 | orchestrator | 00:01:27.330 STDOUT terraform:  + stop_before_destroy = false 2025-09-02 00:01:27.334497 | orchestrator | 00:01:27.330 STDOUT terraform:  + updated = (known 
after apply) 2025-09-02 00:01:27.334501 | orchestrator | 00:01:27.330 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-02 00:01:27.334504 | orchestrator | 00:01:27.330 STDOUT terraform:  + block_device { 2025-09-02 00:01:27.334508 | orchestrator | 00:01:27.330 STDOUT terraform:  + boot_index = 0 2025-09-02 00:01:27.334512 | orchestrator | 00:01:27.330 STDOUT terraform:  + delete_on_termination = false 2025-09-02 00:01:27.334516 | orchestrator | 00:01:27.330 STDOUT terraform:  + destination_type = "volume" 2025-09-02 00:01:27.334519 | orchestrator | 00:01:27.330 STDOUT terraform:  + multiattach = false 2025-09-02 00:01:27.334523 | orchestrator | 00:01:27.330 STDOUT terraform:  + source_type = "volume" 2025-09-02 00:01:27.334527 | orchestrator | 00:01:27.330 STDOUT terraform:  + uuid = (known after apply) 2025-09-02 00:01:27.334534 | orchestrator | 00:01:27.331 STDOUT terraform:  } 2025-09-02 00:01:27.334537 | orchestrator | 00:01:27.331 STDOUT terraform:  + network { 2025-09-02 00:01:27.334541 | orchestrator | 00:01:27.331 STDOUT terraform:  + access_network = false 2025-09-02 00:01:27.334545 | orchestrator | 00:01:27.331 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-02 00:01:27.334549 | orchestrator | 00:01:27.331 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-02 00:01:27.334552 | orchestrator | 00:01:27.331 STDOUT terraform:  + mac = (known after apply) 2025-09-02 00:01:27.334558 | orchestrator | 00:01:27.331 STDOUT terraform:  + name = (known after apply) 2025-09-02 00:01:27.334562 | orchestrator | 00:01:27.331 STDOUT terraform:  + port = (known after apply) 2025-09-02 00:01:27.334566 | orchestrator | 00:01:27.331 STDOUT terraform:  + uuid = (known after apply) 2025-09-02 00:01:27.334570 | orchestrator | 00:01:27.331 STDOUT terraform:  } 2025-09-02 00:01:27.334575 | orchestrator | 00:01:27.331 STDOUT terraform:  } 2025-09-02 00:01:27.334579 | orchestrator | 00:01:27.331 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-09-02 00:01:27.334583 | orchestrator | 00:01:27.331 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-02 00:01:27.334587 | orchestrator | 00:01:27.331 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-02 00:01:27.334591 | orchestrator | 00:01:27.331 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-02 00:01:27.334594 | orchestrator | 00:01:27.331 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-02 00:01:27.334598 | orchestrator | 00:01:27.331 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.334602 | orchestrator | 00:01:27.331 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.334605 | orchestrator | 00:01:27.331 STDOUT terraform:  + config_drive = true 2025-09-02 00:01:27.334609 | orchestrator | 00:01:27.331 STDOUT terraform:  + created = (known after apply) 2025-09-02 00:01:27.334613 | orchestrator | 00:01:27.331 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-02 00:01:27.334617 | orchestrator | 00:01:27.331 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-02 00:01:27.334621 | orchestrator | 00:01:27.331 STDOUT terraform:  + force_delete = false 2025-09-02 00:01:27.334625 | orchestrator | 00:01:27.331 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-02 00:01:27.334629 | orchestrator | 00:01:27.331 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.334633 | orchestrator | 00:01:27.331 STDOUT 
terraform:  + image_id = (known after apply) 2025-09-02 00:01:27.334637 | orchestrator | 00:01:27.331 STDOUT terraform:  + image_name = (known after apply) 2025-09-02 00:01:27.334641 | orchestrator | 00:01:27.331 STDOUT terraform:  + key_pair = "testbed" 2025-09-02 00:01:27.334645 | orchestrator | 00:01:27.331 STDOUT terraform:  + name = "testbed-node-3" 2025-09-02 00:01:27.334649 | orchestrator | 00:01:27.331 STDOUT terraform:  + power_state = "active" 2025-09-02 00:01:27.334656 | orchestrator | 00:01:27.331 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.334660 | orchestrator | 00:01:27.331 STDOUT terraform:  + security_groups = (known after apply) 2025-09-02 00:01:27.334664 | orchestrator | 00:01:27.331 STDOUT terraform:  + stop_before_destroy = false 2025-09-02 00:01:27.334668 | orchestrator | 00:01:27.331 STDOUT terraform:  + updated = (known after apply) 2025-09-02 00:01:27.334672 | orchestrator | 00:01:27.331 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-02 00:01:27.334678 | orchestrator | 00:01:27.331 STDOUT terraform:  + block_device { 2025-09-02 00:01:27.334682 | orchestrator | 00:01:27.331 STDOUT terraform:  + boot_index = 0 2025-09-02 00:01:27.334685 | orchestrator | 00:01:27.331 STDOUT terraform:  + delete_on_termination = false 2025-09-02 00:01:27.334689 | orchestrator | 00:01:27.331 STDOUT terraform:  + destination_type = "volume" 2025-09-02 00:01:27.334693 | orchestrator | 00:01:27.331 STDOUT terraform:  + multiattach = false 2025-09-02 00:01:27.334697 | orchestrator | 00:01:27.332 STDOUT terraform:  + source_type = "volume" 2025-09-02 00:01:27.334700 | orchestrator | 00:01:27.332 STDOUT terraform:  + uuid = (known after apply) 2025-09-02 00:01:27.334704 | orchestrator | 00:01:27.332 STDOUT terraform:  } 2025-09-02 00:01:27.334708 | orchestrator | 00:01:27.332 STDOUT terraform:  + network { 2025-09-02 00:01:27.334718 | orchestrator | 00:01:27.332 STDOUT terraform:  + access_network = false 2025-09-02 00:01:27.334722 | orchestrator | 00:01:27.332 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-02 00:01:27.334726 | orchestrator | 00:01:27.332 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-02 00:01:27.334730 | orchestrator | 00:01:27.332 STDOUT terraform:  + mac = (known after apply) 2025-09-02 00:01:27.334733 | orchestrator | 00:01:27.332 STDOUT terraform:  + name = (known after apply) 2025-09-02 00:01:27.334737 | orchestrator | 00:01:27.332 STDOUT terraform:  + port = (known after apply) 2025-09-02 00:01:27.334741 | orchestrator | 00:01:27.332 STDOUT terraform:  + uuid = (known after apply) 2025-09-02 00:01:27.334744 | orchestrator | 00:01:27.332 STDOUT terraform:  } 2025-09-02 00:01:27.334748 | orchestrator | 00:01:27.332 STDOUT terraform:  } 2025-09-02 00:01:27.334752 | orchestrator | 00:01:27.332 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-09-02 00:01:27.334756 | orchestrator | 00:01:27.332 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-02 00:01:27.334759 | orchestrator | 00:01:27.332 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-02 00:01:27.334763 | orchestrator | 00:01:27.332 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-02 00:01:27.334767 | orchestrator | 00:01:27.332 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-02 00:01:27.334771 | orchestrator | 00:01:27.332 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.334774 | 
orchestrator | 00:01:27.332 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.334784 | orchestrator | 00:01:27.332 STDOUT terraform:  + config_drive = true 2025-09-02 00:01:27.334788 | orchestrator | 00:01:27.332 STDOUT terraform:  + created = (known after apply) 2025-09-02 00:01:27.334791 | orchestrator | 00:01:27.332 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-02 00:01:27.334795 | orchestrator | 00:01:27.332 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-02 00:01:27.334799 | orchestrator | 00:01:27.332 STDOUT terraform:  + force_delete = false 2025-09-02 00:01:27.334802 | orchestrator | 00:01:27.332 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-02 00:01:27.334806 | orchestrator | 00:01:27.332 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.334810 | orchestrator | 00:01:27.332 STDOUT terraform:  + image_id = (known after apply) 2025-09-02 00:01:27.334814 | orchestrator | 00:01:27.332 STDOUT terraform:  + image_name = (known after apply) 2025-09-02 00:01:27.334817 | orchestrator | 00:01:27.332 STDOUT terraform:  + key_pair = "testbed" 2025-09-02 00:01:27.334821 | orchestrator | 00:01:27.332 STDOUT terraform:  + name = "testbed-node-4" 2025-09-02 00:01:27.334825 | orchestrator | 00:01:27.332 STDOUT terraform:  + power_state = "active" 2025-09-02 00:01:27.334828 | orchestrator | 00:01:27.332 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.334832 | orchestrator | 00:01:27.332 STDOUT terraform:  + security_groups = (known after apply) 2025-09-02 00:01:27.334836 | orchestrator | 00:01:27.332 STDOUT terraform:  + stop_before_destroy = false 2025-09-02 00:01:27.334840 | orchestrator | 00:01:27.332 STDOUT terraform:  + updated = (known after apply) 2025-09-02 00:01:27.334843 | orchestrator | 00:01:27.332 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-02 00:01:27.334847 | orchestrator | 00:01:27.332 STDOUT terraform:  + block_device { 2025-09-02 00:01:27.334851 | orchestrator | 00:01:27.332 STDOUT terraform:  + boot_index = 0 2025-09-02 00:01:27.334855 | orchestrator | 00:01:27.332 STDOUT terraform:  + delete_on_termination = false 2025-09-02 00:01:27.334858 | orchestrator | 00:01:27.332 STDOUT terraform:  + destination_type = "volume" 2025-09-02 00:01:27.334862 | orchestrator | 00:01:27.333 STDOUT terraform:  + multiattach = false 2025-09-02 00:01:27.334868 | orchestrator | 00:01:27.333 STDOUT terraform:  + source_type = "volume" 2025-09-02 00:01:27.334872 | orchestrator | 00:01:27.333 STDOUT terraform:  + uuid = (known after apply) 2025-09-02 00:01:27.334875 | orchestrator | 00:01:27.333 STDOUT terraform:  } 2025-09-02 00:01:27.334879 | orchestrator | 00:01:27.333 STDOUT terraform:  + network { 2025-09-02 00:01:27.334883 | orchestrator | 00:01:27.333 STDOUT terraform:  + access_network = false 2025-09-02 00:01:27.334887 | orchestrator | 00:01:27.333 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-02 00:01:27.334890 | orchestrator | 00:01:27.333 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-02 00:01:27.334894 | orchestrator | 00:01:27.333 STDOUT terraform:  + mac = (known after apply) 2025-09-02 00:01:27.334901 | orchestrator | 00:01:27.333 STDOUT terraform:  + name = (known after apply) 2025-09-02 00:01:27.334905 | orchestrator | 00:01:27.333 STDOUT terraform:  + port = (known after apply) 2025-09-02 00:01:27.334908 | orchestrator | 00:01:27.333 STDOUT terraform:  + uuid = (known after apply) 2025-09-02 00:01:27.334912 | 
orchestrator | 00:01:27.333 STDOUT terraform:  } 2025-09-02 00:01:27.334916 | orchestrator | 00:01:27.333 STDOUT terraform:  } 2025-09-02 00:01:27.334920 | orchestrator | 00:01:27.333 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-09-02 00:01:27.334924 | orchestrator | 00:01:27.333 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-02 00:01:27.334927 | orchestrator | 00:01:27.333 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-02 00:01:27.334931 | orchestrator | 00:01:27.333 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-02 00:01:27.334935 | orchestrator | 00:01:27.333 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-02 00:01:27.334938 | orchestrator | 00:01:27.333 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.334942 | orchestrator | 00:01:27.333 STDOUT terraform:  + availability_zone = "nova" 2025-09-02 00:01:27.334946 | orchestrator | 00:01:27.333 STDOUT terraform:  + config_drive = true 2025-09-02 00:01:27.334950 | orchestrator | 00:01:27.333 STDOUT terraform:  + created = (known after apply) 2025-09-02 00:01:27.334953 | orchestrator | 00:01:27.333 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-02 00:01:27.334959 | orchestrator | 00:01:27.333 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-02 00:01:27.334966 | orchestrator | 00:01:27.333 STDOUT terraform:  + force_delete = false 2025-09-02 00:01:27.334969 | orchestrator | 00:01:27.333 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-02 00:01:27.334973 | orchestrator | 00:01:27.333 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.334977 | orchestrator | 00:01:27.333 STDOUT terraform:  + image_id = (known after apply) 2025-09-02 00:01:27.334981 | orchestrator | 00:01:27.333 STDOUT terraform:  + image_name = (known after apply) 2025-09-02 00:01:27.334984 | orchestrator | 00:01:27.333 STDOUT terraform:  + key_pair = "testbed" 2025-09-02 00:01:27.334988 | orchestrator | 00:01:27.333 STDOUT terraform:  + name = "testbed-node-5" 2025-09-02 00:01:27.334992 | orchestrator | 00:01:27.333 STDOUT terraform:  + power_state = "active" 2025-09-02 00:01:27.334996 | orchestrator | 00:01:27.333 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.334999 | orchestrator | 00:01:27.333 STDOUT terraform:  + security_groups = (known after apply) 2025-09-02 00:01:27.335003 | orchestrator | 00:01:27.333 STDOUT terraform:  + stop_before_destroy = false 2025-09-02 00:01:27.335007 | orchestrator | 00:01:27.333 STDOUT terraform:  + updated = (known after apply) 2025-09-02 00:01:27.335010 | orchestrator | 00:01:27.333 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-02 00:01:27.335014 | orchestrator | 00:01:27.333 STDOUT terraform:  + block_device { 2025-09-02 00:01:27.335053 | orchestrator | 00:01:27.333 STDOUT terraform:  + boot_index = 0 2025-09-02 00:01:27.335086 | orchestrator | 00:01:27.334 STDOUT terraform:  + delete_on_termination = false 2025-09-02 00:01:27.335134 | orchestrator | 00:01:27.335 STDOUT terraform:  + destination_type = "volume" 2025-09-02 00:01:27.335172 | orchestrator | 00:01:27.335 STDOUT terraform:  + multiattach = false 2025-09-02 00:01:27.335211 | orchestrator | 00:01:27.335 STDOUT terraform:  + source_type = "volume" 2025-09-02 00:01:27.335257 | orchestrator | 00:01:27.335 STDOUT terraform:  + uuid = (known after apply) 2025-09-02 00:01:27.335278 | orchestrator | 00:01:27.335 
STDOUT terraform:  } 2025-09-02 00:01:27.335301 | orchestrator | 00:01:27.335 STDOUT terraform:  + network { 2025-09-02 00:01:27.335330 | orchestrator | 00:01:27.335 STDOUT terraform:  + access_network = false 2025-09-02 00:01:27.335368 | orchestrator | 00:01:27.335 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-02 00:01:27.335406 | orchestrator | 00:01:27.335 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-02 00:01:27.335445 | orchestrator | 00:01:27.335 STDOUT terraform:  + mac = (known after apply) 2025-09-02 00:01:27.335484 | orchestrator | 00:01:27.335 STDOUT terraform:  + name = (known after apply) 2025-09-02 00:01:27.335523 | orchestrator | 00:01:27.335 STDOUT terraform:  + port = (known after apply) 2025-09-02 00:01:27.335563 | orchestrator | 00:01:27.335 STDOUT terraform:  + uuid = (known after apply) 2025-09-02 00:01:27.335584 | orchestrator | 00:01:27.335 STDOUT terraform:  } 2025-09-02 00:01:27.335605 | orchestrator | 00:01:27.335 STDOUT terraform:  } 2025-09-02 00:01:27.335647 | orchestrator | 00:01:27.335 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-09-02 00:01:27.335689 | orchestrator | 00:01:27.335 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-09-02 00:01:27.335724 | orchestrator | 00:01:27.335 STDOUT terraform:  + fingerprint = (known after apply) 2025-09-02 00:01:27.335760 | orchestrator | 00:01:27.335 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.335788 | orchestrator | 00:01:27.335 STDOUT terraform:  + name = "testbed" 2025-09-02 00:01:27.335820 | orchestrator | 00:01:27.335 STDOUT terraform:  + private_key = (sensitive value) 2025-09-02 00:01:27.335855 | orchestrator | 00:01:27.335 STDOUT terraform:  + public_key = (known after apply) 2025-09-02 00:01:27.335890 | orchestrator | 00:01:27.335 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.335929 | orchestrator | 00:01:27.335 STDOUT terraform:  + user_id = (known after apply) 2025-09-02 00:01:27.335950 | orchestrator | 00:01:27.335 STDOUT terraform:  } 2025-09-02 00:01:27.336008 | orchestrator | 00:01:27.335 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-09-02 00:01:27.336077 | orchestrator | 00:01:27.336 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-02 00:01:27.336114 | orchestrator | 00:01:27.336 STDOUT terraform:  + device = (known after apply) 2025-09-02 00:01:27.336151 | orchestrator | 00:01:27.336 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.336193 | orchestrator | 00:01:27.336 STDOUT terraform:  + instance_id = (known after apply) 2025-09-02 00:01:27.336230 | orchestrator | 00:01:27.336 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.336268 | orchestrator | 00:01:27.336 STDOUT terraform:  + volume_id = (known after apply) 2025-09-02 00:01:27.336290 | orchestrator | 00:01:27.336 STDOUT terraform:  } 2025-09-02 00:01:27.336348 | orchestrator | 00:01:27.336 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-09-02 00:01:27.336406 | orchestrator | 00:01:27.336 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-02 00:01:27.336442 | orchestrator | 00:01:27.336 STDOUT terraform:  + device = (known after apply) 2025-09-02 00:01:27.336478 | orchestrator | 00:01:27.336 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.336512 | 
orchestrator | 00:01:27.336 STDOUT terraform:  + instance_id = (known after apply) 2025-09-02 00:01:27.336549 | orchestrator | 00:01:27.336 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.336584 | orchestrator | 00:01:27.336 STDOUT terraform:  + volume_id = (known after apply) 2025-09-02 00:01:27.336604 | orchestrator | 00:01:27.336 STDOUT terraform:  } 2025-09-02 00:01:27.336661 | orchestrator | 00:01:27.336 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-09-02 00:01:27.336719 | orchestrator | 00:01:27.336 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-02 00:01:27.336755 | orchestrator | 00:01:27.336 STDOUT terraform:  + device = (known after apply) 2025-09-02 00:01:27.336791 | orchestrator | 00:01:27.336 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.336827 | orchestrator | 00:01:27.336 STDOUT terraform:  + instance_id = (known after apply) 2025-09-02 00:01:27.336865 | orchestrator | 00:01:27.336 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.336900 | orchestrator | 00:01:27.336 STDOUT terraform:  + volume_id = (known after apply) 2025-09-02 00:01:27.336921 | orchestrator | 00:01:27.336 STDOUT terraform:  } 2025-09-02 00:01:27.336978 | orchestrator | 00:01:27.336 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-09-02 00:01:27.337046 | orchestrator | 00:01:27.336 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-02 00:01:27.337086 | orchestrator | 00:01:27.337 STDOUT terraform:  + device = (known after apply) 2025-09-02 00:01:27.337122 | orchestrator | 00:01:27.337 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.337157 | orchestrator | 00:01:27.337 STDOUT terraform:  + instance_id = (known after apply) 2025-09-02 00:01:27.337196 | orchestrator | 00:01:27.337 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.337231 | orchestrator | 00:01:27.337 STDOUT terraform:  + volume_id = (known after apply) 2025-09-02 00:01:27.337252 | orchestrator | 00:01:27.337 STDOUT terraform:  } 2025-09-02 00:01:27.337309 | orchestrator | 00:01:27.337 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-09-02 00:01:27.337371 | orchestrator | 00:01:27.337 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-02 00:01:27.337407 | orchestrator | 00:01:27.337 STDOUT terraform:  + device = (known after apply) 2025-09-02 00:01:27.337445 | orchestrator | 00:01:27.337 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.337480 | orchestrator | 00:01:27.337 STDOUT terraform:  + instance_id = (known after apply) 2025-09-02 00:01:27.337517 | orchestrator | 00:01:27.337 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.337552 | orchestrator | 00:01:27.337 STDOUT terraform:  + volume_id = (known after apply) 2025-09-02 00:01:27.337573 | orchestrator | 00:01:27.337 STDOUT terraform:  } 2025-09-02 00:01:27.337630 | orchestrator | 00:01:27.337 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-09-02 00:01:27.337688 | orchestrator | 00:01:27.337 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-02 00:01:27.337723 | orchestrator | 00:01:27.337 STDOUT terraform:  + device = (known after 
apply) 2025-09-02 00:01:27.337758 | orchestrator | 00:01:27.337 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.337794 | orchestrator | 00:01:27.337 STDOUT terraform:  + instance_id = (known after apply) 2025-09-02 00:01:27.337832 | orchestrator | 00:01:27.337 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.337868 | orchestrator | 00:01:27.337 STDOUT terraform:  + volume_id = (known after apply) 2025-09-02 00:01:27.337889 | orchestrator | 00:01:27.337 STDOUT terraform:  } 2025-09-02 00:01:27.337945 | orchestrator | 00:01:27.337 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-09-02 00:01:27.338001 | orchestrator | 00:01:27.337 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-02 00:01:27.338083 | orchestrator | 00:01:27.338 STDOUT terraform:  + device = (known after apply) 2025-09-02 00:01:27.338140 | orchestrator | 00:01:27.338 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.342103 | orchestrator | 00:01:27.342 STDOUT terraform:  + instance_id = (known after apply) 2025-09-02 00:01:27.342143 | orchestrator | 00:01:27.342 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.342179 | orchestrator | 00:01:27.342 STDOUT terraform:  + volume_id = (known after apply) 2025-09-02 00:01:27.342200 | orchestrator | 00:01:27.342 STDOUT terraform:  } 2025-09-02 00:01:27.342259 | orchestrator | 00:01:27.342 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-09-02 00:01:27.342314 | orchestrator | 00:01:27.342 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-02 00:01:27.342351 | orchestrator | 00:01:27.342 STDOUT terraform:  + device = (known after apply) 2025-09-02 00:01:27.342387 | orchestrator | 00:01:27.342 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.342423 | orchestrator | 00:01:27.342 STDOUT terraform:  + instance_id = (known after apply) 2025-09-02 00:01:27.342459 | orchestrator | 00:01:27.342 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.342501 | orchestrator | 00:01:27.342 STDOUT terraform:  + volume_id = (known after apply) 2025-09-02 00:01:27.342523 | orchestrator | 00:01:27.342 STDOUT terraform:  } 2025-09-02 00:01:27.342579 | orchestrator | 00:01:27.342 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-09-02 00:01:27.342635 | orchestrator | 00:01:27.342 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-02 00:01:27.342670 | orchestrator | 00:01:27.342 STDOUT terraform:  + device = (known after apply) 2025-09-02 00:01:27.342706 | orchestrator | 00:01:27.342 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.342742 | orchestrator | 00:01:27.342 STDOUT terraform:  + instance_id = (known after apply) 2025-09-02 00:01:27.342778 | orchestrator | 00:01:27.342 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.342814 | orchestrator | 00:01:27.342 STDOUT terraform:  + volume_id = (known after apply) 2025-09-02 00:01:27.342836 | orchestrator | 00:01:27.342 STDOUT terraform:  } 2025-09-02 00:01:27.342911 | orchestrator | 00:01:27.342 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-09-02 00:01:27.342984 | orchestrator | 00:01:27.342 STDOUT terraform:  + resource 
"openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-09-02 00:01:27.343030 | orchestrator | 00:01:27.342 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-02 00:01:27.343067 | orchestrator | 00:01:27.343 STDOUT terraform:  + floating_ip = (known after apply) 2025-09-02 00:01:27.343103 | orchestrator | 00:01:27.343 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.343138 | orchestrator | 00:01:27.343 STDOUT terraform:  + port_id = (known after apply) 2025-09-02 00:01:27.343174 | orchestrator | 00:01:27.343 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.343194 | orchestrator | 00:01:27.343 STDOUT terraform:  } 2025-09-02 00:01:27.343249 | orchestrator | 00:01:27.343 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-09-02 00:01:27.343303 | orchestrator | 00:01:27.343 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-09-02 00:01:27.343338 | orchestrator | 00:01:27.343 STDOUT terraform:  + address = (known after apply) 2025-09-02 00:01:27.343370 | orchestrator | 00:01:27.343 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.343402 | orchestrator | 00:01:27.343 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-02 00:01:27.343437 | orchestrator | 00:01:27.343 STDOUT terraform:  + dns_name = (known after apply) 2025-09-02 00:01:27.343469 | orchestrator | 00:01:27.343 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-02 00:01:27.343501 | orchestrator | 00:01:27.343 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.343530 | orchestrator | 00:01:27.343 STDOUT terraform:  + pool = "public" 2025-09-02 00:01:27.343562 | orchestrator | 00:01:27.343 STDOUT terraform:  + port_id = (known after apply) 2025-09-02 00:01:27.343594 | orchestrator | 00:01:27.343 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.343626 | orchestrator | 00:01:27.343 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-02 00:01:27.343664 | orchestrator | 00:01:27.343 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.343687 | orchestrator | 00:01:27.343 STDOUT terraform:  } 2025-09-02 00:01:27.343741 | orchestrator | 00:01:27.343 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-09-02 00:01:27.343792 | orchestrator | 00:01:27.343 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-09-02 00:01:27.343835 | orchestrator | 00:01:27.343 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-02 00:01:27.343878 | orchestrator | 00:01:27.343 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.343909 | orchestrator | 00:01:27.343 STDOUT terraform:  + availability_zone_hints = [ 2025-09-02 00:01:27.343931 | orchestrator | 00:01:27.343 STDOUT terraform:  + "nova", 2025-09-02 00:01:27.343952 | orchestrator | 00:01:27.343 STDOUT terraform:  ] 2025-09-02 00:01:27.343997 | orchestrator | 00:01:27.343 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-02 00:01:27.344072 | orchestrator | 00:01:27.344 STDOUT terraform:  + external = (known after apply) 2025-09-02 00:01:27.344119 | orchestrator | 00:01:27.344 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.344162 | orchestrator | 00:01:27.344 STDOUT terraform:  + mtu = (known after apply) 2025-09-02 00:01:27.344208 | orchestrator | 00:01:27.344 STDOUT terraform:  + name = 
"net-testbed-management" 2025-09-02 00:01:27.344250 | orchestrator | 00:01:27.344 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-02 00:01:27.344292 | orchestrator | 00:01:27.344 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-02 00:01:27.344336 | orchestrator | 00:01:27.344 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.344380 | orchestrator | 00:01:27.344 STDOUT terraform:  + shared = (known after apply) 2025-09-02 00:01:27.344424 | orchestrator | 00:01:27.344 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.344466 | orchestrator | 00:01:27.344 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-09-02 00:01:27.344496 | orchestrator | 00:01:27.344 STDOUT terraform:  + segments (known after apply) 2025-09-02 00:01:27.344517 | orchestrator | 00:01:27.344 STDOUT terraform:  } 2025-09-02 00:01:27.344569 | orchestrator | 00:01:27.344 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-09-02 00:01:27.344620 | orchestrator | 00:01:27.344 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-09-02 00:01:27.344669 | orchestrator | 00:01:27.344 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-02 00:01:27.344713 | orchestrator | 00:01:27.344 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-02 00:01:27.344754 | orchestrator | 00:01:27.344 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-02 00:01:27.344795 | orchestrator | 00:01:27.344 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.344843 | orchestrator | 00:01:27.344 STDOUT terraform:  + device_id = (known after apply) 2025-09-02 00:01:27.344891 | orchestrator | 00:01:27.344 STDOUT terraform:  + device_owner = (known after apply) 2025-09-02 00:01:27.344933 | orchestrator | 00:01:27.344 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-02 00:01:27.344978 | orchestrator | 00:01:27.344 STDOUT terraform:  + dns_name = (known after apply) 2025-09-02 00:01:27.345033 | orchestrator | 00:01:27.344 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.345124 | orchestrator | 00:01:27.345 STDOUT terraform:  + mac_address = (known after apply) 2025-09-02 00:01:27.345200 | orchestrator | 00:01:27.345 STDOUT terraform:  + network_id = (known after apply) 2025-09-02 00:01:27.345291 | orchestrator | 00:01:27.345 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-02 00:01:27.345440 | orchestrator | 00:01:27.345 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-02 00:01:27.345542 | orchestrator | 00:01:27.345 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.345905 | orchestrator | 00:01:27.345 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-02 00:01:27.346302 | orchestrator | 00:01:27.345 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.346417 | orchestrator | 00:01:27.346 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.346514 | orchestrator | 00:01:27.346 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-02 00:01:27.346760 | orchestrator | 00:01:27.346 STDOUT terraform:  } 2025-09-02 00:01:27.346974 | orchestrator | 00:01:27.346 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.347074 | orchestrator | 00:01:27.347 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-02 00:01:27.347172 | orchestrator | 00:01:27.347 STDOUT 
terraform:  } 2025-09-02 00:01:27.347267 | orchestrator | 00:01:27.347 STDOUT terraform:  + binding (known after apply) 2025-09-02 00:01:27.347348 | orchestrator | 00:01:27.347 STDOUT terraform:  + fixed_ip { 2025-09-02 00:01:27.347417 | orchestrator | 00:01:27.347 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-09-02 00:01:27.347550 | orchestrator | 00:01:27.347 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-02 00:01:27.347628 | orchestrator | 00:01:27.347 STDOUT terraform:  } 2025-09-02 00:01:27.347681 | orchestrator | 00:01:27.347 STDOUT terraform:  } 2025-09-02 00:01:27.347755 | orchestrator | 00:01:27.347 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-09-02 00:01:27.347978 | orchestrator | 00:01:27.347 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-02 00:01:27.348198 | orchestrator | 00:01:27.348 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-02 00:01:27.348329 | orchestrator | 00:01:27.348 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-02 00:01:27.348401 | orchestrator | 00:01:27.348 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-02 00:01:27.348558 | orchestrator | 00:01:27.348 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.348662 | orchestrator | 00:01:27.348 STDOUT terraform:  + device_id = (known after apply) 2025-09-02 00:01:27.348720 | orchestrator | 00:01:27.348 STDOUT terraform:  + device_owner = (known after apply) 2025-09-02 00:01:27.348764 | orchestrator | 00:01:27.348 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-02 00:01:27.348838 | orchestrator | 00:01:27.348 STDOUT terraform:  + dns_name = (known after apply) 2025-09-02 00:01:27.348942 | orchestrator | 00:01:27.348 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.349134 | orchestrator | 00:01:27.349 STDOUT terraform:  + mac_address = (known after apply) 2025-09-02 00:01:27.349222 | orchestrator | 00:01:27.349 STDOUT terraform:  + network_id = (known after apply) 2025-09-02 00:01:27.349318 | orchestrator | 00:01:27.349 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-02 00:01:27.349446 | orchestrator | 00:01:27.349 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-02 00:01:27.349560 | orchestrator | 00:01:27.349 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.349680 | orchestrator | 00:01:27.349 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-02 00:01:27.349750 | orchestrator | 00:01:27.349 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.349886 | orchestrator | 00:01:27.349 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.349993 | orchestrator | 00:01:27.349 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-02 00:01:27.350106 | orchestrator | 00:01:27.350 STDOUT terraform:  } 2025-09-02 00:01:27.350190 | orchestrator | 00:01:27.350 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.350283 | orchestrator | 00:01:27.350 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-02 00:01:27.350420 | orchestrator | 00:01:27.350 STDOUT terraform:  } 2025-09-02 00:01:27.350452 | orchestrator | 00:01:27.350 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.350567 | orchestrator | 00:01:27.350 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-02 00:01:27.350627 | orchestrator | 00:01:27.350 STDOUT terraform:  } 
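The port entries above (manager_port_management and node_port_management[0]) plan Neutron ports with a fixed management address (192.168.16.5 for the manager, 192.168.16.10 for the first node) and several allowed_address_pairs such as 192.168.112.0/20, 192.168.16.254, 192.168.16.8 and 192.168.16.9, so traffic sent from those virtual addresses is not dropped by port security. A sketch of a node port with this shape follows; the network reference matches the net_management resource visible in the plan, while the subnet and security group resource names and var.node_count are assumptions.

# Sketch only: the shape of the planned node_port_management ports.
resource "openstack_networking_port_v2" "node_port_management" {
  count              = var.node_count
  network_id         = openstack_networking_network_v2.net_management.id
  security_group_ids = [openstack_networking_secgroup_v2.security_group_management.id]  # assumed name

  # One fixed management address per node: 192.168.16.10, .11, .12, ...
  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id                    # assumed name
    ip_address = "192.168.16.${10 + count.index}"
  }

  # Additional addresses the port may legitimately use (VIP/VRRP-style addresses).
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/20"
  }
}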
2025-09-02 00:01:27.350734 | orchestrator | 00:01:27.350 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.351086 | orchestrator | 00:01:27.350 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-02 00:01:27.351140 | orchestrator | 00:01:27.351 STDOUT terraform:  } 2025-09-02 00:01:27.351243 | orchestrator | 00:01:27.351 STDOUT terraform:  + binding (known after apply) 2025-09-02 00:01:27.351285 | orchestrator | 00:01:27.351 STDOUT terraform:  + fixed_ip { 2025-09-02 00:01:27.351428 | orchestrator | 00:01:27.351 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-09-02 00:01:27.351501 | orchestrator | 00:01:27.351 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-02 00:01:27.351569 | orchestrator | 00:01:27.351 STDOUT terraform:  } 2025-09-02 00:01:27.351647 | orchestrator | 00:01:27.351 STDOUT terraform:  } 2025-09-02 00:01:27.351739 | orchestrator | 00:01:27.351 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-09-02 00:01:27.351907 | orchestrator | 00:01:27.351 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-02 00:01:27.352162 | orchestrator | 00:01:27.352 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-02 00:01:27.352292 | orchestrator | 00:01:27.352 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-02 00:01:27.352398 | orchestrator | 00:01:27.352 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-02 00:01:27.352532 | orchestrator | 00:01:27.352 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.352650 | orchestrator | 00:01:27.352 STDOUT terraform:  + device_id = (known after apply) 2025-09-02 00:01:27.352785 | orchestrator | 00:01:27.352 STDOUT terraform:  + device_owner = (known after apply) 2025-09-02 00:01:27.352851 | orchestrator | 00:01:27.352 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-02 00:01:27.352899 | orchestrator | 00:01:27.352 STDOUT terraform:  + dns_name = (known after apply) 2025-09-02 00:01:27.352974 | orchestrator | 00:01:27.352 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.353119 | orchestrator | 00:01:27.353 STDOUT terraform:  + mac_address = (known after apply) 2025-09-02 00:01:27.353400 | orchestrator | 00:01:27.353 STDOUT terraform:  + network_id = (known after apply) 2025-09-02 00:01:27.353523 | orchestrator | 00:01:27.353 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-02 00:01:27.353646 | orchestrator | 00:01:27.353 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-02 00:01:27.353901 | orchestrator | 00:01:27.353 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.354047 | orchestrator | 00:01:27.353 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-02 00:01:27.354159 | orchestrator | 00:01:27.354 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.354272 | orchestrator | 00:01:27.354 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.354399 | orchestrator | 00:01:27.354 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-02 00:01:27.354454 | orchestrator | 00:01:27.354 STDOUT terraform:  } 2025-09-02 00:01:27.354558 | orchestrator | 00:01:27.354 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.354630 | orchestrator | 00:01:27.354 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-02 00:01:27.354729 | orchestrator | 00:01:27.354 STDOUT terraform:  } 2025-09-02 
00:01:27.354861 | orchestrator | 00:01:27.354 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.354954 | orchestrator | 00:01:27.354 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-02 00:01:27.355067 | orchestrator | 00:01:27.354 STDOUT terraform:  } 2025-09-02 00:01:27.355189 | orchestrator | 00:01:27.355 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.355327 | orchestrator | 00:01:27.355 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-02 00:01:27.355410 | orchestrator | 00:01:27.355 STDOUT terraform:  } 2025-09-02 00:01:27.355472 | orchestrator | 00:01:27.355 STDOUT terraform:  + binding (known after apply) 2025-09-02 00:01:27.355575 | orchestrator | 00:01:27.355 STDOUT terraform:  + fixed_ip { 2025-09-02 00:01:27.355737 | orchestrator | 00:01:27.355 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-09-02 00:01:27.355926 | orchestrator | 00:01:27.355 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-02 00:01:27.356316 | orchestrator | 00:01:27.355 STDOUT terraform:  } 2025-09-02 00:01:27.356508 | orchestrator | 00:01:27.356 STDOUT terraform:  } 2025-09-02 00:01:27.356577 | orchestrator | 00:01:27.356 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-09-02 00:01:27.356809 | orchestrator | 00:01:27.356 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-02 00:01:27.356953 | orchestrator | 00:01:27.356 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-02 00:01:27.357128 | orchestrator | 00:01:27.357 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-02 00:01:27.357291 | orchestrator | 00:01:27.357 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-02 00:01:27.357373 | orchestrator | 00:01:27.357 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.357580 | orchestrator | 00:01:27.357 STDOUT terraform:  + device_id = (known after apply) 2025-09-02 00:01:27.357711 | orchestrator | 00:01:27.357 STDOUT terraform:  + device_owner = (known after apply) 2025-09-02 00:01:27.357850 | orchestrator | 00:01:27.357 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-02 00:01:27.357980 | orchestrator | 00:01:27.357 STDOUT terraform:  + dns_name = (known after apply) 2025-09-02 00:01:27.358086 | orchestrator | 00:01:27.358 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.358136 | orchestrator | 00:01:27.358 STDOUT terraform:  + mac_address = (known after apply) 2025-09-02 00:01:27.358216 | orchestrator | 00:01:27.358 STDOUT terraform:  + network_id = (known after apply) 2025-09-02 00:01:27.358378 | orchestrator | 00:01:27.358 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-02 00:01:27.358556 | orchestrator | 00:01:27.358 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-02 00:01:27.358649 | orchestrator | 00:01:27.358 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.363808 | orchestrator | 00:01:27.358 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-02 00:01:27.363901 | orchestrator | 00:01:27.363 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.363935 | orchestrator | 00:01:27.363 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.363979 | orchestrator | 00:01:27.363 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-02 00:01:27.364004 | orchestrator | 00:01:27.363 STDOUT terraform:  } 2025-09-02 00:01:27.364062 | 
orchestrator | 00:01:27.364 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.364104 | orchestrator | 00:01:27.364 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-02 00:01:27.364127 | orchestrator | 00:01:27.364 STDOUT terraform:  } 2025-09-02 00:01:27.364161 | orchestrator | 00:01:27.364 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.364198 | orchestrator | 00:01:27.364 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-02 00:01:27.364232 | orchestrator | 00:01:27.364 STDOUT terraform:  } 2025-09-02 00:01:27.364260 | orchestrator | 00:01:27.364 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.364297 | orchestrator | 00:01:27.364 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-02 00:01:27.364323 | orchestrator | 00:01:27.364 STDOUT terraform:  } 2025-09-02 00:01:27.364357 | orchestrator | 00:01:27.364 STDOUT terraform:  + binding (known after apply) 2025-09-02 00:01:27.364382 | orchestrator | 00:01:27.364 STDOUT terraform:  + fixed_ip { 2025-09-02 00:01:27.364415 | orchestrator | 00:01:27.364 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-02 00:01:27.364457 | orchestrator | 00:01:27.364 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-02 00:01:27.364480 | orchestrator | 00:01:27.364 STDOUT terraform:  } 2025-09-02 00:01:27.364504 | orchestrator | 00:01:27.364 STDOUT terraform:  } 2025-09-02 00:01:27.364562 | orchestrator | 00:01:27.364 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-02 00:01:27.364619 | orchestrator | 00:01:27.364 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-02 00:01:27.364663 | orchestrator | 00:01:27.364 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-02 00:01:27.364724 | orchestrator | 00:01:27.364 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-02 00:01:27.365040 | orchestrator | 00:01:27.364 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-02 00:01:27.365541 | orchestrator | 00:01:27.365 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.365983 | orchestrator | 00:01:27.365 STDOUT terraform:  + device_id = (known after apply) 2025-09-02 00:01:27.366389 | orchestrator | 00:01:27.366 STDOUT terraform:  + device_owner = (known after apply) 2025-09-02 00:01:27.366491 | orchestrator | 00:01:27.366 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-02 00:01:27.366499 | orchestrator | 00:01:27.366 STDOUT terraform:  + dns_name = (known after apply) 2025-09-02 00:01:27.366576 | orchestrator | 00:01:27.366 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.366584 | orchestrator | 00:01:27.366 STDOUT terraform:  + mac_address = (known after apply) 2025-09-02 00:01:27.366626 | orchestrator | 00:01:27.366 STDOUT terraform:  + network_id = (known after apply) 2025-09-02 00:01:27.366686 | orchestrator | 00:01:27.366 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-02 00:01:27.366694 | orchestrator | 00:01:27.366 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-02 00:01:27.366739 | orchestrator | 00:01:27.366 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.366781 | orchestrator | 00:01:27.366 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-02 00:01:27.366822 | orchestrator | 00:01:27.366 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.366835 | orchestrator | 
00:01:27.366 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.366868 | orchestrator | 00:01:27.366 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-02 00:01:27.366879 | orchestrator | 00:01:27.366 STDOUT terraform:  } 2025-09-02 00:01:27.366885 | orchestrator | 00:01:27.366 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.366926 | orchestrator | 00:01:27.366 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-02 00:01:27.366932 | orchestrator | 00:01:27.366 STDOUT terraform:  } 2025-09-02 00:01:27.366938 | orchestrator | 00:01:27.366 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.366987 | orchestrator | 00:01:27.366 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-02 00:01:27.366993 | orchestrator | 00:01:27.366 STDOUT terraform:  } 2025-09-02 00:01:27.366999 | orchestrator | 00:01:27.366 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.367057 | orchestrator | 00:01:27.366 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-02 00:01:27.367063 | orchestrator | 00:01:27.367 STDOUT terraform:  } 2025-09-02 00:01:27.367069 | orchestrator | 00:01:27.367 STDOUT terraform:  + binding (known after apply) 2025-09-02 00:01:27.367073 | orchestrator | 00:01:27.367 STDOUT terraform:  + fixed_ip { 2025-09-02 00:01:27.367121 | orchestrator | 00:01:27.367 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-02 00:01:27.367129 | orchestrator | 00:01:27.367 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-02 00:01:27.367133 | orchestrator | 00:01:27.367 STDOUT terraform:  } 2025-09-02 00:01:27.367139 | orchestrator | 00:01:27.367 STDOUT terraform:  } 2025-09-02 00:01:27.367216 | orchestrator | 00:01:27.367 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-02 00:01:27.367277 | orchestrator | 00:01:27.367 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-02 00:01:27.367283 | orchestrator | 00:01:27.367 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-02 00:01:27.367321 | orchestrator | 00:01:27.367 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-02 00:01:27.367377 | orchestrator | 00:01:27.367 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-02 00:01:27.367383 | orchestrator | 00:01:27.367 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.367418 | orchestrator | 00:01:27.367 STDOUT terraform:  + device_id = (known after apply) 2025-09-02 00:01:27.367462 | orchestrator | 00:01:27.367 STDOUT terraform:  + device_owner = (known after apply) 2025-09-02 00:01:27.367497 | orchestrator | 00:01:27.367 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-02 00:01:27.367625 | orchestrator | 00:01:27.367 STDOUT terraform:  + dns_name = (known after apply) 2025-09-02 00:01:27.367658 | orchestrator | 00:01:27.367 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.367671 | orchestrator | 00:01:27.367 STDOUT terraform:  + mac_address = (known after apply) 2025-09-02 00:01:27.367717 | orchestrator | 00:01:27.367 STDOUT terraform:  + network_id = (known after apply) 2025-09-02 00:01:27.367776 | orchestrator | 00:01:27.367 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-02 00:01:27.367831 | orchestrator | 00:01:27.367 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-02 00:01:27.367871 | orchestrator | 00:01:27.367 STDOUT terraform:  + region = (known after apply) 
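Earlier in the plan, nine indexed openstack_compute_volume_attach_v2.node_volume_attachment resources are created whose instance_id and volume_id are only known after apply; they attach additional data volumes to the node instances planned above. How those attachments map onto the nodes is not visible in the plan, so the sketch below is only one plausible wiring, with the volume resource name (openstack_blockstorage_volume_v3.node_extra) and the attachment count variable assumed for illustration.

# Sketch only: attach pre-created data volumes to the indexed node instances.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = var.volume_attachment_count                # nine attachments in this plan
  instance_id = openstack_compute_instance_v2.node_server[count.index % var.node_count].id
  volume_id   = openstack_blockstorage_volume_v3.node_extra[count.index].id
}

Because both instance_id and volume_id come from resources created in the same apply, Terraform sequences the attachments after the instances and volumes, which is why every attribute of these attachment resources shows up as (known after apply).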
2025-09-02 00:01:27.367926 | orchestrator | 00:01:27.367 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-02 00:01:27.367971 | orchestrator | 00:01:27.367 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.367975 | orchestrator | 00:01:27.367 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.367979 | orchestrator | 00:01:27.367 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-02 00:01:27.367983 | orchestrator | 00:01:27.367 STDOUT terraform:  } 2025-09-02 00:01:27.367987 | orchestrator | 00:01:27.367 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.367991 | orchestrator | 00:01:27.367 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-02 00:01:27.367995 | orchestrator | 00:01:27.367 STDOUT terraform:  } 2025-09-02 00:01:27.367998 | orchestrator | 00:01:27.367 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.368002 | orchestrator | 00:01:27.367 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-02 00:01:27.368006 | orchestrator | 00:01:27.367 STDOUT terraform:  } 2025-09-02 00:01:27.368012 | orchestrator | 00:01:27.367 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.368016 | orchestrator | 00:01:27.367 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-02 00:01:27.368028 | orchestrator | 00:01:27.367 STDOUT terraform:  } 2025-09-02 00:01:27.368032 | orchestrator | 00:01:27.367 STDOUT terraform:  + binding (known after apply) 2025-09-02 00:01:27.368036 | orchestrator | 00:01:27.367 STDOUT terraform:  + fixed_ip { 2025-09-02 00:01:27.368040 | orchestrator | 00:01:27.367 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-09-02 00:01:27.368114 | orchestrator | 00:01:27.368 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-02 00:01:27.368120 | orchestrator | 00:01:27.368 STDOUT terraform:  } 2025-09-02 00:01:27.368209 | orchestrator | 00:01:27.368 STDOUT terraform:  } 2025-09-02 00:01:27.368268 | orchestrator | 00:01:27.368 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-02 00:01:27.368388 | orchestrator | 00:01:27.368 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-02 00:01:27.368463 | orchestrator | 00:01:27.368 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-02 00:01:27.368520 | orchestrator | 00:01:27.368 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-02 00:01:27.368524 | orchestrator | 00:01:27.368 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-02 00:01:27.368530 | orchestrator | 00:01:27.368 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.368534 | orchestrator | 00:01:27.368 STDOUT terraform:  + device_id = (known after apply) 2025-09-02 00:01:27.368538 | orchestrator | 00:01:27.368 STDOUT terraform:  + device_owner = (known after apply) 2025-09-02 00:01:27.368542 | orchestrator | 00:01:27.368 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-02 00:01:27.368554 | orchestrator | 00:01:27.368 STDOUT terraform:  + dns_name = (known after apply) 2025-09-02 00:01:27.368558 | orchestrator | 00:01:27.368 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.368561 | orchestrator | 00:01:27.368 STDOUT terraform:  + mac_address = (known after apply) 2025-09-02 00:01:27.368565 | orchestrator | 00:01:27.368 STDOUT terraform:  + network_id = (known after apply) 2025-09-02 00:01:27.368569 | orchestrator | 00:01:27.368 STDOUT terraform: 
 + port_security_enabled = (known after apply) 2025-09-02 00:01:27.368575 | orchestrator | 00:01:27.368 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-02 00:01:27.368629 | orchestrator | 00:01:27.368 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.368634 | orchestrator | 00:01:27.368 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-02 00:01:27.368700 | orchestrator | 00:01:27.368 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.368705 | orchestrator | 00:01:27.368 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.368711 | orchestrator | 00:01:27.368 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-02 00:01:27.368715 | orchestrator | 00:01:27.368 STDOUT terraform:  } 2025-09-02 00:01:27.368772 | orchestrator | 00:01:27.368 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.368781 | orchestrator | 00:01:27.368 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-02 00:01:27.368785 | orchestrator | 00:01:27.368 STDOUT terraform:  } 2025-09-02 00:01:27.368791 | orchestrator | 00:01:27.368 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.368818 | orchestrator | 00:01:27.368 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-02 00:01:27.368825 | orchestrator | 00:01:27.368 STDOUT terraform:  } 2025-09-02 00:01:27.368853 | orchestrator | 00:01:27.368 STDOUT terraform:  + allowed_address_pairs { 2025-09-02 00:01:27.368899 | orchestrator | 00:01:27.368 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-02 00:01:27.368908 | orchestrator | 00:01:27.368 STDOUT terraform:  } 2025-09-02 00:01:27.368914 | orchestrator | 00:01:27.368 STDOUT terraform:  + binding (known after apply) 2025-09-02 00:01:27.368918 | orchestrator | 00:01:27.368 STDOUT terraform:  + fixed_ip { 2025-09-02 00:01:27.369037 | orchestrator | 00:01:27.368 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-09-02 00:01:27.369048 | orchestrator | 00:01:27.368 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-02 00:01:27.369052 | orchestrator | 00:01:27.368 STDOUT terraform:  } 2025-09-02 00:01:27.369056 | orchestrator | 00:01:27.368 STDOUT terraform:  } 2025-09-02 00:01:27.369059 | orchestrator | 00:01:27.368 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-02 00:01:27.369126 | orchestrator | 00:01:27.369 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-09-02 00:01:27.369132 | orchestrator | 00:01:27.369 STDOUT terraform:  + force_destroy = false 2025-09-02 00:01:27.369136 | orchestrator | 00:01:27.369 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.369146 | orchestrator | 00:01:27.369 STDOUT terraform:  + port_id = (known after apply) 2025-09-02 00:01:27.369208 | orchestrator | 00:01:27.369 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.369216 | orchestrator | 00:01:27.369 STDOUT terraform:  + router_id = (known after apply) 2025-09-02 00:01:27.369222 | orchestrator | 00:01:27.369 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-02 00:01:27.369228 | orchestrator | 00:01:27.369 STDOUT terraform:  } 2025-09-02 00:01:27.369289 | orchestrator | 00:01:27.369 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-09-02 00:01:27.369300 | orchestrator | 00:01:27.369 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-02 00:01:27.369366 | orchestrator | 
00:01:27.369 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-02 00:01:27.369375 | orchestrator | 00:01:27.369 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.369383 | orchestrator | 00:01:27.369 STDOUT terraform:  + availability_zone_hints = [ 2025-09-02 00:01:27.369450 | orchestrator | 00:01:27.369 STDOUT terraform:  + "nova", 2025-09-02 00:01:27.369456 | orchestrator | 00:01:27.369 STDOUT terraform:  ] 2025-09-02 00:01:27.369460 | orchestrator | 00:01:27.369 STDOUT terraform:  + distributed = (known after apply) 2025-09-02 00:01:27.369537 | orchestrator | 00:01:27.369 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-02 00:01:27.369547 | orchestrator | 00:01:27.369 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-02 00:01:27.369638 | orchestrator | 00:01:27.369 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-02 00:01:27.369664 | orchestrator | 00:01:27.369 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.369742 | orchestrator | 00:01:27.369 STDOUT terraform:  + name = "testbed" 2025-09-02 00:01:27.369797 | orchestrator | 00:01:27.369 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.369885 | orchestrator | 00:01:27.369 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.369942 | orchestrator | 00:01:27.369 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-02 00:01:27.370003 | orchestrator | 00:01:27.369 STDOUT terraform:  } 2025-09-02 00:01:27.370095 | orchestrator | 00:01:27.369 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-02 00:01:27.370143 | orchestrator | 00:01:27.369 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-02 00:01:27.370150 | orchestrator | 00:01:27.369 STDOUT terraform:  + description = "ssh" 2025-09-02 00:01:27.370154 | orchestrator | 00:01:27.369 STDOUT terraform:  + direction = "ingress" 2025-09-02 00:01:27.370158 | orchestrator | 00:01:27.369 STDOUT terraform:  + ethertype = "IPv4" 2025-09-02 00:01:27.370162 | orchestrator | 00:01:27.369 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.370165 | orchestrator | 00:01:27.369 STDOUT terraform:  + port_range_max = 22 2025-09-02 00:01:27.370169 | orchestrator | 00:01:27.369 STDOUT terraform:  + port_range_min = 22 2025-09-02 00:01:27.370177 | orchestrator | 00:01:27.369 STDOUT terraform:  + protocol = "tcp" 2025-09-02 00:01:27.370181 | orchestrator | 00:01:27.369 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.370185 | orchestrator | 00:01:27.370 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-02 00:01:27.370189 | orchestrator | 00:01:27.370 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-02 00:01:27.370195 | orchestrator | 00:01:27.370 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-02 00:01:27.370199 | orchestrator | 00:01:27.370 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-02 00:01:27.370203 | orchestrator | 00:01:27.370 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.370206 | orchestrator | 00:01:27.370 STDOUT terraform:  } 2025-09-02 00:01:27.370304 | orchestrator | 00:01:27.370 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-02 00:01:27.370485 | orchestrator | 00:01:27.370 STDOUT 
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-02 00:01:27.370493 | orchestrator | 00:01:27.370 STDOUT terraform:  + description = "wireguard" 2025-09-02 00:01:27.370497 | orchestrator | 00:01:27.370 STDOUT terraform:  + direction = "ingress" 2025-09-02 00:01:27.370501 | orchestrator | 00:01:27.370 STDOUT terraform:  + ethertype = "IPv4" 2025-09-02 00:01:27.370505 | orchestrator | 00:01:27.370 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.370509 | orchestrator | 00:01:27.370 STDOUT terraform:  + port_range_max = 51820 2025-09-02 00:01:27.370512 | orchestrator | 00:01:27.370 STDOUT terraform:  + port_range_min = 51820 2025-09-02 00:01:27.370518 | orchestrator | 00:01:27.370 STDOUT terraform:  + protocol = "udp" 2025-09-02 00:01:27.370522 | orchestrator | 00:01:27.370 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.370526 | orchestrator | 00:01:27.370 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-02 00:01:27.370621 | orchestrator | 00:01:27.370 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-02 00:01:27.370691 | orchestrator | 00:01:27.370 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-02 00:01:27.370701 | orchestrator | 00:01:27.370 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-02 00:01:27.370705 | orchestrator | 00:01:27.370 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.370709 | orchestrator | 00:01:27.370 STDOUT terraform:  } 2025-09-02 00:01:27.370713 | orchestrator | 00:01:27.370 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-02 00:01:27.370756 | orchestrator | 00:01:27.370 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-02 00:01:27.370813 | orchestrator | 00:01:27.370 STDOUT terraform:  + direction = "ingress" 2025-09-02 00:01:27.370819 | orchestrator | 00:01:27.370 STDOUT terraform:  + ethertype = "IPv4" 2025-09-02 00:01:27.370829 | orchestrator | 00:01:27.370 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.370879 | orchestrator | 00:01:27.370 STDOUT terraform:  + protocol = "tcp" 2025-09-02 00:01:27.370936 | orchestrator | 00:01:27.370 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.370943 | orchestrator | 00:01:27.370 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-02 00:01:27.371008 | orchestrator | 00:01:27.370 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-02 00:01:27.371014 | orchestrator | 00:01:27.370 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-02 00:01:27.371066 | orchestrator | 00:01:27.370 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-02 00:01:27.371129 | orchestrator | 00:01:27.371 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.371135 | orchestrator | 00:01:27.371 STDOUT terraform:  } 2025-09-02 00:01:27.371197 | orchestrator | 00:01:27.371 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-02 00:01:27.371207 | orchestrator | 00:01:27.371 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-02 00:01:27.371245 | orchestrator | 00:01:27.371 STDOUT terraform:  + direction = "ingress" 2025-09-02 00:01:27.371265 | orchestrator | 00:01:27.371 STDOUT terraform:  
+ ethertype = "IPv4" 2025-09-02 00:01:27.371286 | orchestrator | 00:01:27.371 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.371349 | orchestrator | 00:01:27.371 STDOUT terraform:  + protocol = "udp" 2025-09-02 00:01:27.371358 | orchestrator | 00:01:27.371 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.371410 | orchestrator | 00:01:27.371 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-02 00:01:27.371416 | orchestrator | 00:01:27.371 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-02 00:01:27.371444 | orchestrator | 00:01:27.371 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-02 00:01:27.371511 | orchestrator | 00:01:27.371 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-02 00:01:27.371521 | orchestrator | 00:01:27.371 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.371525 | orchestrator | 00:01:27.371 STDOUT terraform:  } 2025-09-02 00:01:27.371609 | orchestrator | 00:01:27.371 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-02 00:01:27.371617 | orchestrator | 00:01:27.371 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-02 00:01:27.371684 | orchestrator | 00:01:27.371 STDOUT terraform:  + direction = "ingress" 2025-09-02 00:01:27.371799 | orchestrator | 00:01:27.371 STDOUT terraform:  + ethertype = "IPv4" 2025-09-02 00:01:27.371888 | orchestrator | 00:01:27.371 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.371936 | orchestrator | 00:01:27.371 STDOUT terraform:  + protocol = "icmp" 2025-09-02 00:01:27.371970 | orchestrator | 00:01:27.371 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.372057 | orchestrator | 00:01:27.371 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-02 00:01:27.372062 | orchestrator | 00:01:27.371 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-02 00:01:27.372099 | orchestrator | 00:01:27.371 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-02 00:01:27.372294 | orchestrator | 00:01:27.371 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-02 00:01:27.372364 | orchestrator | 00:01:27.371 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.372390 | orchestrator | 00:01:27.371 STDOUT terraform:  } 2025-09-02 00:01:27.372433 | orchestrator | 00:01:27.371 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-09-02 00:01:27.372465 | orchestrator | 00:01:27.371 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-09-02 00:01:27.372509 | orchestrator | 00:01:27.371 STDOUT terraform:  + direction = "ingress" 2025-09-02 00:01:27.372514 | orchestrator | 00:01:27.372 STDOUT terraform:  + ethertype = "IPv4" 2025-09-02 00:01:27.372517 | orchestrator | 00:01:27.372 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.372521 | orchestrator | 00:01:27.372 STDOUT terraform:  + protocol = "tcp" 2025-09-02 00:01:27.372525 | orchestrator | 00:01:27.372 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.372531 | orchestrator | 00:01:27.372 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-02 00:01:27.372535 | orchestrator | 00:01:27.372 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-02 
00:01:27.372539 | orchestrator | 00:01:27.372 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-02 00:01:27.372542 | orchestrator | 00:01:27.372 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-02 00:01:27.372546 | orchestrator | 00:01:27.372 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.372550 | orchestrator | 00:01:27.372 STDOUT terraform:  } 2025-09-02 00:01:27.372554 | orchestrator | 00:01:27.372 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-09-02 00:01:27.372558 | orchestrator | 00:01:27.372 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-09-02 00:01:27.372562 | orchestrator | 00:01:27.372 STDOUT terraform:  + direction = "ingress" 2025-09-02 00:01:27.372566 | orchestrator | 00:01:27.372 STDOUT terraform:  + ethertype = "IPv4" 2025-09-02 00:01:27.372569 | orchestrator | 00:01:27.372 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.372573 | orchestrator | 00:01:27.372 STDOUT terraform:  + protocol = "udp" 2025-09-02 00:01:27.372577 | orchestrator | 00:01:27.372 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.372583 | orchestrator | 00:01:27.372 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-02 00:01:27.372586 | orchestrator | 00:01:27.372 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-02 00:01:27.372597 | orchestrator | 00:01:27.372 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-02 00:01:27.372670 | orchestrator | 00:01:27.372 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-02 00:01:27.372788 | orchestrator | 00:01:27.372 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.372909 | orchestrator | 00:01:27.372 STDOUT terraform:  } 2025-09-02 00:01:27.372981 | orchestrator | 00:01:27.372 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-09-02 00:01:27.373047 | orchestrator | 00:01:27.372 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-09-02 00:01:27.373052 | orchestrator | 00:01:27.372 STDOUT terraform:  + direction = "ingress" 2025-09-02 00:01:27.373059 | orchestrator | 00:01:27.372 STDOUT terraform:  + ethertype = "IPv4" 2025-09-02 00:01:27.373062 | orchestrator | 00:01:27.372 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.373066 | orchestrator | 00:01:27.372 STDOUT terraform:  + protocol = "icmp" 2025-09-02 00:01:27.373072 | orchestrator | 00:01:27.372 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.373076 | orchestrator | 00:01:27.372 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-02 00:01:27.373080 | orchestrator | 00:01:27.372 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-02 00:01:27.373083 | orchestrator | 00:01:27.372 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-02 00:01:27.373087 | orchestrator | 00:01:27.372 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-02 00:01:27.373091 | orchestrator | 00:01:27.373 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.373095 | orchestrator | 00:01:27.373 STDOUT terraform:  } 2025-09-02 00:01:27.373100 | orchestrator | 00:01:27.373 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-09-02 00:01:27.373154 | orchestrator | 
00:01:27.373 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-09-02 00:01:27.373212 | orchestrator | 00:01:27.373 STDOUT terraform:  + description = "vrrp" 2025-09-02 00:01:27.373364 | orchestrator | 00:01:27.373 STDOUT terraform:  + direction = "ingress" 2025-09-02 00:01:27.373448 | orchestrator | 00:01:27.373 STDOUT terraform:  + ethertype = "IPv4" 2025-09-02 00:01:27.373456 | orchestrator | 00:01:27.373 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.373460 | orchestrator | 00:01:27.373 STDOUT terraform:  + protocol = "112" 2025-09-02 00:01:27.373463 | orchestrator | 00:01:27.373 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.373469 | orchestrator | 00:01:27.373 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-02 00:01:27.373473 | orchestrator | 00:01:27.373 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-02 00:01:27.373477 | orchestrator | 00:01:27.373 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-02 00:01:27.373485 | orchestrator | 00:01:27.373 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-02 00:01:27.373489 | orchestrator | 00:01:27.373 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.373492 | orchestrator | 00:01:27.373 STDOUT terraform:  } 2025-09-02 00:01:27.373498 | orchestrator | 00:01:27.373 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-09-02 00:01:27.373584 | orchestrator | 00:01:27.373 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-09-02 00:01:27.373700 | orchestrator | 00:01:27.373 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.373738 | orchestrator | 00:01:27.373 STDOUT terraform:  + description = "management security group" 2025-09-02 00:01:27.373902 | orchestrator | 00:01:27.373 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.373945 | orchestrator | 00:01:27.373 STDOUT terraform:  + name = "testbed-management" 2025-09-02 00:01:27.373952 | orchestrator | 00:01:27.373 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.373958 | orchestrator | 00:01:27.373 STDOUT terraform:  + stateful = (known after apply) 2025-09-02 00:01:27.373962 | orchestrator | 00:01:27.373 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.373966 | orchestrator | 00:01:27.373 STDOUT terraform:  } 2025-09-02 00:01:27.373970 | orchestrator | 00:01:27.373 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-09-02 00:01:27.373974 | orchestrator | 00:01:27.373 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-09-02 00:01:27.373980 | orchestrator | 00:01:27.373 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.373984 | orchestrator | 00:01:27.373 STDOUT terraform:  + description = "node security group" 2025-09-02 00:01:27.373988 | orchestrator | 00:01:27.373 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.373992 | orchestrator | 00:01:27.373 STDOUT terraform:  + name = "testbed-node" 2025-09-02 00:01:27.373996 | orchestrator | 00:01:27.373 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.373999 | orchestrator | 00:01:27.373 STDOUT terraform:  + stateful = (known after apply) 2025-09-02 00:01:27.374003 | orchestrator | 00:01:27.373 STDOUT terraform:  + tenant_id = (known after 
apply) 2025-09-02 00:01:27.374007 | orchestrator | 00:01:27.373 STDOUT terraform:  } 2025-09-02 00:01:27.374050 | orchestrator | 00:01:27.373 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-09-02 00:01:27.374113 | orchestrator | 00:01:27.373 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-09-02 00:01:27.374172 | orchestrator | 00:01:27.374 STDOUT terraform:  + all_tags = (known after apply) 2025-09-02 00:01:27.374177 | orchestrator | 00:01:27.374 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-09-02 00:01:27.374189 | orchestrator | 00:01:27.374 STDOUT terraform:  + dns_nameservers = [ 2025-09-02 00:01:27.374239 | orchestrator | 00:01:27.374 STDOUT terraform:  + "8.8.8.8", 2025-09-02 00:01:27.374295 | orchestrator | 00:01:27.374 STDOUT terraform:  + "9.9.9.9", 2025-09-02 00:01:27.374414 | orchestrator | 00:01:27.374 STDOUT terraform:  ] 2025-09-02 00:01:27.374498 | orchestrator | 00:01:27.374 STDOUT terraform:  + enable_dhcp = true 2025-09-02 00:01:27.374562 | orchestrator | 00:01:27.374 STDOUT terraform:  + gateway_ip = (known after apply) 2025-09-02 00:01:27.374622 | orchestrator | 00:01:27.374 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.374703 | orchestrator | 00:01:27.374 STDOUT terraform:  + ip_version = 4 2025-09-02 00:01:27.374749 | orchestrator | 00:01:27.374 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-09-02 00:01:27.374792 | orchestrator | 00:01:27.374 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-09-02 00:01:27.374899 | orchestrator | 00:01:27.374 STDOUT terraform:  + name = "subnet-testbed-management" 2025-09-02 00:01:27.374945 | orchestrator | 00:01:27.374 STDOUT terraform:  + network_id = (known after apply) 2025-09-02 00:01:27.374952 | orchestrator | 00:01:27.374 STDOUT terraform:  + no_gateway = false 2025-09-02 00:01:27.374956 | orchestrator | 00:01:27.374 STDOUT terraform:  + region = (known after apply) 2025-09-02 00:01:27.374960 | orchestrator | 00:01:27.374 STDOUT terraform:  + service_types = (known after apply) 2025-09-02 00:01:27.374964 | orchestrator | 00:01:27.374 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-02 00:01:27.374968 | orchestrator | 00:01:27.374 STDOUT terraform:  + allocation_pool { 2025-09-02 00:01:27.374971 | orchestrator | 00:01:27.374 STDOUT terraform:  + end = "192.168.31.250" 2025-09-02 00:01:27.374975 | orchestrator | 00:01:27.374 STDOUT terraform:  + start = "192.168.31.200 2025-09-02 00:01:27.375358 | orchestrator | 00:01:27.375 STDOUT terraform: " 2025-09-02 00:01:27.375367 | orchestrator | 00:01:27.375 STDOUT terraform:  } 2025-09-02 00:01:27.375373 | orchestrator | 00:01:27.375 STDOUT terraform:  } 2025-09-02 00:01:27.375406 | orchestrator | 00:01:27.375 STDOUT terraform:  # terraform_data.image will be created 2025-09-02 00:01:27.375413 | orchestrator | 00:01:27.375 STDOUT terraform:  + resource "terraform_data" "image" { 2025-09-02 00:01:27.375462 | orchestrator | 00:01:27.375 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.375470 | orchestrator | 00:01:27.375 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-02 00:01:27.375476 | orchestrator | 00:01:27.375 STDOUT terraform:  + output = (known after apply) 2025-09-02 00:01:27.375484 | orchestrator | 00:01:27.375 STDOUT terraform:  } 2025-09-02 00:01:27.375558 | orchestrator | 00:01:27.375 STDOUT terraform:  # terraform_data.image_node will be created 2025-09-02 00:01:27.375564 | orchestrator | 00:01:27.375 
STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-09-02 00:01:27.375570 | orchestrator | 00:01:27.375 STDOUT terraform:  + id = (known after apply) 2025-09-02 00:01:27.375627 | orchestrator | 00:01:27.375 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-02 00:01:27.375633 | orchestrator | 00:01:27.375 STDOUT terraform:  + output = (known after apply) 2025-09-02 00:01:27.375637 | orchestrator | 00:01:27.375 STDOUT terraform:  } 2025-09-02 00:01:27.375642 | orchestrator | 00:01:27.375 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-09-02 00:01:27.375654 | orchestrator | 00:01:27.375 STDOUT terraform: Changes to Outputs: 2025-09-02 00:01:27.375704 | orchestrator | 00:01:27.375 STDOUT terraform:  + manager_address = (sensitive value) 2025-09-02 00:01:27.375712 | orchestrator | 00:01:27.375 STDOUT terraform:  + private_key = (sensitive value) 2025-09-02 00:01:27.571723 | orchestrator | 00:01:27.571 STDOUT terraform: terraform_data.image_node: Creating... 2025-09-02 00:01:27.571897 | orchestrator | 00:01:27.571 STDOUT terraform: terraform_data.image: Creating... 2025-09-02 00:01:27.572116 | orchestrator | 00:01:27.571 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=98fd4d6b-89ae-1d69-5d04-587d3c891938] 2025-09-02 00:01:27.572310 | orchestrator | 00:01:27.572 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=9d75b015-04cc-e645-fc92-6d5f4bbae05e] 2025-09-02 00:01:27.580059 | orchestrator | 00:01:27.579 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-09-02 00:01:27.585312 | orchestrator | 00:01:27.584 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-09-02 00:01:27.598974 | orchestrator | 00:01:27.598 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-09-02 00:01:27.599963 | orchestrator | 00:01:27.599 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-09-02 00:01:27.603197 | orchestrator | 00:01:27.603 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-09-02 00:01:27.613304 | orchestrator | 00:01:27.612 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-09-02 00:01:27.613840 | orchestrator | 00:01:27.612 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-09-02 00:01:27.614161 | orchestrator | 00:01:27.613 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-09-02 00:01:27.615436 | orchestrator | 00:01:27.614 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-09-02 00:01:27.617904 | orchestrator | 00:01:27.617 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-09-02 00:01:28.105146 | orchestrator | 00:01:28.104 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-02 00:01:28.109448 | orchestrator | 00:01:28.109 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-09-02 00:01:28.117341 | orchestrator | 00:01:28.117 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-02 00:01:28.125047 | orchestrator | 00:01:28.124 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 
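The plan closes with two terraform_data resources that merely wrap the image name "Ubuntu 24.04", and the apply immediately resolves them through data.openstack_images_image_v2 lookups (both return the same image id). A minimal HCL sketch of that pattern, assuming the name is passed through terraform_data and resolved by the images data source; the variable name, the most_recent flag and the exact wiring in the real testbed Terraform are assumptions:

# Sketch only: reproduces the terraform_data.image -> images data source pairing
# visible in the log. Variable name and most_recent flag are assumptions.
variable "image" {
  type    = string
  default = "Ubuntu 24.04"
}

resource "terraform_data" "image" {
  input = var.image
}

data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output
  most_recent = true
}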
2025-09-02 00:01:28.184450 | orchestrator | 00:01:28.184 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-09-02 00:01:28.190699 | orchestrator | 00:01:28.190 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-09-02 00:01:28.601829 | orchestrator | 00:01:28.601 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=04201e34-7687-4cb1-a5d3-ff18b8555d7f] 2025-09-02 00:01:28.623269 | orchestrator | 00:01:28.623 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-09-02 00:01:31.234417 | orchestrator | 00:01:31.234 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=73f7aaa7-092a-4c1a-a663-fd98a6f92d43] 2025-09-02 00:01:31.649988 | orchestrator | 00:01:31.241 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-09-02 00:01:31.650094 | orchestrator | 00:01:31.259 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=533befbb-84ad-4d2f-a6fe-9bcc757d70d3] 2025-09-02 00:01:31.650102 | orchestrator | 00:01:31.275 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-09-02 00:01:31.650108 | orchestrator | 00:01:31.278 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=0c16e54e-6892-4c41-822b-0a71b602051a] 2025-09-02 00:01:31.650114 | orchestrator | 00:01:31.281 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-09-02 00:01:31.650119 | orchestrator | 00:01:31.285 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=8c8a1bad-d3c8-4ce8-afe3-e6b6dedd17eb] 2025-09-02 00:01:31.650123 | orchestrator | 00:01:31.288 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-09-02 00:01:31.650128 | orchestrator | 00:01:31.308 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=4851d1ac-b90a-4b34-9adb-d79585c21de6] 2025-09-02 00:01:31.650132 | orchestrator | 00:01:31.310 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=6860efa8-e6c6-43d7-8842-eeafa8a27f70] 2025-09-02 00:01:31.650146 | orchestrator | 00:01:31.325 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-09-02 00:01:31.650151 | orchestrator | 00:01:31.326 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-09-02 00:01:31.650156 | orchestrator | 00:01:31.361 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=422500f3-63b7-48d3-a02b-7c8a68fd4498] 2025-09-02 00:01:31.650160 | orchestrator | 00:01:31.370 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-09-02 00:01:31.650164 | orchestrator | 00:01:31.379 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=ab73bc68-b021-49f6-bbbb-bb60dd18c0cd] 2025-09-02 00:01:31.650168 | orchestrator | 00:01:31.395 STDOUT terraform: local_file.id_rsa_pub: Creating... 
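The node_volume[0..8] and node_base_volume[0..5] resources being created here are clearly count-indexed block storage volumes. A rough sketch of the shape such a resource takes; the count, naming scheme and size below are illustrative assumptions, not values taken from the testbed sources:

# Illustrative sketch only: count, naming scheme and size are assumptions.
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9
  name  = "testbed-node-volume-${count.index}" # hypothetical naming
  size  = 20                                   # hypothetical size in GB
}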
2025-09-02 00:01:31.650173 | orchestrator | 00:01:31.424 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=5a98751c-9a0d-464e-b805-2bbf5e836a0e] 2025-09-02 00:01:31.650178 | orchestrator | 00:01:31.429 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-09-02 00:01:31.819918 | orchestrator | 00:01:31.819 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 1s [id=e9cde109694e48bee42722476bfa34b145181a8f] 2025-09-02 00:01:31.820155 | orchestrator | 00:01:31.820 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 1s [id=b45f76d8564b15d7cff54444d90761d731b39bfc] 2025-09-02 00:01:31.974695 | orchestrator | 00:01:31.974 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=f8a402b3-ab3c-4990-8e8d-6f91d3e459f9] 2025-09-02 00:01:32.477516 | orchestrator | 00:01:32.477 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=3499324c-60d8-4fe2-b8e1-ddbaa4ce4cf1] 2025-09-02 00:01:32.484532 | orchestrator | 00:01:32.484 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-09-02 00:01:34.655370 | orchestrator | 00:01:34.654 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=44dae450-8362-4b96-8159-84e27a3f13ee] 2025-09-02 00:01:34.668238 | orchestrator | 00:01:34.667 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=5cc69dd7-f132-4ade-913f-1aa60f8d1fc7] 2025-09-02 00:01:34.714832 | orchestrator | 00:01:34.714 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=72be9134-82c8-4fbd-a40e-19493d1fd0d5] 2025-09-02 00:01:34.722581 | orchestrator | 00:01:34.722 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=ebaf2104-8d32-4707-a68f-9d7668415e6b] 2025-09-02 00:01:34.742854 | orchestrator | 00:01:34.742 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=bf440116-e340-4d35-9c90-505955753716] 2025-09-02 00:01:34.788118 | orchestrator | 00:01:34.787 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=59f489c4-9d81-4778-bdf1-baefbcbe9222] 2025-09-02 00:01:35.057767 | orchestrator | 00:01:35.057 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=1c4e751d-6a9b-49fb-9ab7-97bbcdab2ac1] 2025-09-02 00:01:35.064482 | orchestrator | 00:01:35.064 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-09-02 00:01:35.065427 | orchestrator | 00:01:35.065 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-09-02 00:01:35.071229 | orchestrator | 00:01:35.070 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-09-02 00:01:35.292718 | orchestrator | 00:01:35.292 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=f13651b9-77b3-43c4-8645-2fe5171f0b39] 2025-09-02 00:01:35.301870 | orchestrator | 00:01:35.301 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 
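subnet-testbed-management is created here with exactly the attributes listed in the plan further up: CIDR 192.168.16.0/20, DNS servers 8.8.8.8 and 9.9.9.9, DHCP enabled, and an allocation pool of 192.168.31.200 to 192.168.31.250. A sketch of HCL matching those planned values; only the network_id reference is an assumption about how the resources are wired together:

# Attribute values taken from the plan output above; network_id wiring is assumed.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}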
2025-09-02 00:01:35.302712 | orchestrator | 00:01:35.302 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=b4fc056c-f876-40f8-82e9-ce1af1472455] 2025-09-02 00:01:35.309485 | orchestrator | 00:01:35.309 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-09-02 00:01:35.311386 | orchestrator | 00:01:35.311 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-09-02 00:01:35.313855 | orchestrator | 00:01:35.313 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-09-02 00:01:35.314160 | orchestrator | 00:01:35.313 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-09-02 00:01:35.316025 | orchestrator | 00:01:35.315 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-09-02 00:01:35.317177 | orchestrator | 00:01:35.317 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-09-02 00:01:35.331456 | orchestrator | 00:01:35.331 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-09-02 00:01:35.333796 | orchestrator | 00:01:35.333 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-09-02 00:01:35.460738 | orchestrator | 00:01:35.460 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=9251563b-0e98-4c23-8110-b8e8bf8c5478] 2025-09-02 00:01:35.466673 | orchestrator | 00:01:35.466 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-09-02 00:01:35.632587 | orchestrator | 00:01:35.632 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=c039128f-5af7-41e4-9ecf-4e3e4b205559] 2025-09-02 00:01:35.647707 | orchestrator | 00:01:35.647 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-09-02 00:01:35.708097 | orchestrator | 00:01:35.707 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=ec3f65f4-0d18-4103-bb30-6b37c7652156] 2025-09-02 00:01:35.720927 | orchestrator | 00:01:35.720 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-09-02 00:01:35.833640 | orchestrator | 00:01:35.833 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=3df95051-f810-4b9d-96ec-05b3724f4946] 2025-09-02 00:01:35.841739 | orchestrator | 00:01:35.841 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-09-02 00:01:35.878216 | orchestrator | 00:01:35.877 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=271a900b-9e67-404e-afb8-6e9d42777fa1] 2025-09-02 00:01:35.893652 | orchestrator | 00:01:35.893 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-09-02 00:01:36.003770 | orchestrator | 00:01:36.003 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=f8473465-2807-43ad-a6b3-57ec0486f736] 2025-09-02 00:01:36.015595 | orchestrator | 00:01:36.015 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 
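The security groups and rules created in this block mirror the plan entries above, for example testbed-management with an ingress TCP/22 rule described as "ssh" and open to 0.0.0.0/0. A sketch of the corresponding HCL with values copied from the plan; only the cross-reference between the two resources is assumed:

# Values from the plan above; the security_group_id reference is assumed.
resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}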
2025-09-02 00:01:36.040713 | orchestrator | 00:01:36.040 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=3d7eb537-dd0b-41d8-a719-5b83773893b6] 2025-09-02 00:01:36.053223 | orchestrator | 00:01:36.053 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-09-02 00:01:36.189540 | orchestrator | 00:01:36.189 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=938bdd90-2262-4f0e-8e59-26fe0c610350] 2025-09-02 00:01:36.291028 | orchestrator | 00:01:36.290 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=84d19491-cc63-4f04-8c94-893f474a3951] 2025-09-02 00:01:36.325095 | orchestrator | 00:01:36.324 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 0s [id=5c86c7dd-e297-46d8-96d1-012aa46c8532] 2025-09-02 00:01:36.368171 | orchestrator | 00:01:36.367 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=a0e23f51-c980-4932-99a1-af589a8ff487] 2025-09-02 00:01:36.553101 | orchestrator | 00:01:36.552 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=4b831de3-e189-4f08-b479-8a4e7aee17ba] 2025-09-02 00:01:36.629259 | orchestrator | 00:01:36.628 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=ccecfe80-df93-4849-9d04-f8a095c13abf] 2025-09-02 00:01:36.765897 | orchestrator | 00:01:36.765 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=c323becb-e653-4fe3-8e8e-5268518ad6f8] 2025-09-02 00:01:36.812352 | orchestrator | 00:01:36.812 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=9dace14b-6ef8-4dc4-a39f-d4e37a96e4a8] 2025-09-02 00:01:36.833398 | orchestrator | 00:01:36.833 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=d6321341-0d97-4993-a898-37069b63a847] 2025-09-02 00:01:37.326423 | orchestrator | 00:01:37.325 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=e922f0e4-81ae-4fb0-b0b4-418641903e14] 2025-09-02 00:01:37.373837 | orchestrator | 00:01:37.373 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-09-02 00:01:37.374791 | orchestrator | 00:01:37.374 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-09-02 00:01:37.377491 | orchestrator | 00:01:37.377 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-09-02 00:01:37.380347 | orchestrator | 00:01:37.380 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-09-02 00:01:37.388104 | orchestrator | 00:01:37.387 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-09-02 00:01:37.400238 | orchestrator | 00:01:37.400 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-09-02 00:01:37.413472 | orchestrator | 00:01:37.410 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 
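The node_port_management ports carry four allowed_address_pairs entries (192.168.112.0/20, 192.168.16.254/20, 192.168.16.8/20 and 192.168.16.9/20) plus one fixed IP per node (192.168.16.14 for index 4 and 192.168.16.15 for index 5 in the plan). A sketch of such a port resource; the security group wiring and the "offset 10 plus index" address calculation are assumptions chosen only to match the two fixed IPs visible in the plan:

# allowed_address_pairs and fixed IPs taken from the plan above; the address
# offset and the security group reference are assumptions.
resource "openstack_networking_port_v2" "node_port_management" {
  count              = 6
  network_id         = openstack_networking_network_v2.net_management.id
  security_group_ids = [openstack_networking_secgroup_v2.security_group_node.id]

  allowed_address_pairs { ip_address = "192.168.112.0/20" }
  allowed_address_pairs { ip_address = "192.168.16.254/20" }
  allowed_address_pairs { ip_address = "192.168.16.8/20" }
  allowed_address_pairs { ip_address = "192.168.16.9/20" }

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = cidrhost("192.168.16.0/20", 10 + count.index) # .14 at index 4, .15 at index 5
  }
}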
2025-09-02 00:01:39.347349 | orchestrator | 00:01:39.346 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=aa98398a-1732-4311-9bf8-ff02df0de985] 2025-09-02 00:01:39.365361 | orchestrator | 00:01:39.365 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-09-02 00:01:39.366581 | orchestrator | 00:01:39.366 STDOUT terraform: local_file.inventory: Creating... 2025-09-02 00:01:39.367501 | orchestrator | 00:01:39.367 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-09-02 00:01:39.374517 | orchestrator | 00:01:39.374 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=9a0f9d7dce50558ed1fb1421c89359a2a4621664] 2025-09-02 00:01:39.376566 | orchestrator | 00:01:39.376 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=379fd02ecf659c47ee2b660202df99e89056a7fc] 2025-09-02 00:01:40.721426 | orchestrator | 00:01:40.720 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=aa98398a-1732-4311-9bf8-ff02df0de985] 2025-09-02 00:01:47.374886 | orchestrator | 00:01:47.374 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-09-02 00:01:47.379021 | orchestrator | 00:01:47.378 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-09-02 00:01:47.392568 | orchestrator | 00:01:47.392 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-09-02 00:01:47.396677 | orchestrator | 00:01:47.396 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-09-02 00:01:47.402044 | orchestrator | 00:01:47.401 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-09-02 00:01:47.413362 | orchestrator | 00:01:47.413 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-09-02 00:01:57.376913 | orchestrator | 00:01:57.376 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-09-02 00:01:57.379956 | orchestrator | 00:01:57.379 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-09-02 00:01:57.393194 | orchestrator | 00:01:57.393 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-09-02 00:01:57.397424 | orchestrator | 00:01:57.397 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-09-02 00:01:57.402684 | orchestrator | 00:01:57.402 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-09-02 00:01:57.414163 | orchestrator | 00:01:57.413 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-09-02 00:01:57.860679 | orchestrator | 00:01:57.860 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=41ccc5fe-1aa1-4001-bbac-7b6a4550a144] 2025-09-02 00:01:58.156426 | orchestrator | 00:01:58.156 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=42df2d09-5ecf-47d0-9743-3d3e8b4a9fae] 2025-09-02 00:02:07.378612 | orchestrator | 00:02:07.378 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... 
[30s elapsed] 2025-09-02 00:02:07.380653 | orchestrator | 00:02:07.380 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-09-02 00:02:07.393940 | orchestrator | 00:02:07.393 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-09-02 00:02:07.414340 | orchestrator | 00:02:07.414 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-09-02 00:02:08.207925 | orchestrator | 00:02:08.207 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=153e2136-56c2-4880-b347-a7cdadd62cab] 2025-09-02 00:02:08.241235 | orchestrator | 00:02:08.240 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=7468bb1d-ef12-494c-9ef7-9add5b303f10] 2025-09-02 00:02:08.375330 | orchestrator | 00:02:08.374 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=37d53efa-cce6-4a2e-b83f-ba16c1b63b49] 2025-09-02 00:02:08.703359 | orchestrator | 00:02:08.702 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 32s [id=627f0ab5-f893-49c4-b456-7b5afbafb2c5] 2025-09-02 00:02:08.723015 | orchestrator | 00:02:08.722 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-09-02 00:02:08.734575 | orchestrator | 00:02:08.734 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-09-02 00:02:08.736504 | orchestrator | 00:02:08.736 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=3579890969390374833] 2025-09-02 00:02:08.740997 | orchestrator | 00:02:08.740 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-09-02 00:02:08.751395 | orchestrator | 00:02:08.751 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-09-02 00:02:08.752285 | orchestrator | 00:02:08.752 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-09-02 00:02:08.764165 | orchestrator | 00:02:08.764 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-09-02 00:02:08.767051 | orchestrator | 00:02:08.766 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-09-02 00:02:08.769380 | orchestrator | 00:02:08.769 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-09-02 00:02:08.774397 | orchestrator | 00:02:08.774 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-09-02 00:02:08.775600 | orchestrator | 00:02:08.775 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-09-02 00:02:08.783705 | orchestrator | 00:02:08.783 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
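The nine node_volume_attachment resources attach the extra volumes to the last three node servers: the instance/volume id pairs in the completion messages that follow map attachments 0/3/6 to node_server[3], 1/4/7 to node_server[4] and 2/5/8 to node_server[5]. A sketch that reproduces that mapping; the modulo expression is inferred from those id pairs and the actual testbed configuration may express it differently:

# The index mapping is inferred from the attachment ids in the apply output
# below; the real configuration may compute it differently.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}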
2025-09-02 00:02:12.146544 | orchestrator | 00:02:12.146 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=37d53efa-cce6-4a2e-b83f-ba16c1b63b49/422500f3-63b7-48d3-a02b-7c8a68fd4498] 2025-09-02 00:02:12.176074 | orchestrator | 00:02:12.175 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=37d53efa-cce6-4a2e-b83f-ba16c1b63b49/533befbb-84ad-4d2f-a6fe-9bcc757d70d3] 2025-09-02 00:02:12.179452 | orchestrator | 00:02:12.179 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=627f0ab5-f893-49c4-b456-7b5afbafb2c5/6860efa8-e6c6-43d7-8842-eeafa8a27f70] 2025-09-02 00:02:12.215049 | orchestrator | 00:02:12.214 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=41ccc5fe-1aa1-4001-bbac-7b6a4550a144/5a98751c-9a0d-464e-b805-2bbf5e836a0e] 2025-09-02 00:02:12.218423 | orchestrator | 00:02:12.218 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=627f0ab5-f893-49c4-b456-7b5afbafb2c5/4851d1ac-b90a-4b34-9adb-d79585c21de6] 2025-09-02 00:02:18.321062 | orchestrator | 00:02:18.320 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 9s [id=41ccc5fe-1aa1-4001-bbac-7b6a4550a144/0c16e54e-6892-4c41-822b-0a71b602051a] 2025-09-02 00:02:18.335461 | orchestrator | 00:02:18.335 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 9s [id=37d53efa-cce6-4a2e-b83f-ba16c1b63b49/73f7aaa7-092a-4c1a-a663-fd98a6f92d43] 2025-09-02 00:02:18.349839 | orchestrator | 00:02:18.349 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=41ccc5fe-1aa1-4001-bbac-7b6a4550a144/ab73bc68-b021-49f6-bbbb-bb60dd18c0cd] 2025-09-02 00:02:18.359187 | orchestrator | 00:02:18.358 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 9s [id=627f0ab5-f893-49c4-b456-7b5afbafb2c5/8c8a1bad-d3c8-4ce8-afe3-e6b6dedd17eb] 2025-09-02 00:02:18.786847 | orchestrator | 00:02:18.786 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-09-02 00:02:28.788086 | orchestrator | 00:02:28.787 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-09-02 00:02:29.236181 | orchestrator | 00:02:29.235 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=341ed426-cd25-4290-ad40-5f1719a28de8] 2025-09-02 00:02:29.249108 | orchestrator | 00:02:29.248 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
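Both Terraform outputs are declared sensitive, which is why the plan printed them as (sensitive value) and the final Outputs block that follows shows them empty. A sketch of how such outputs look in HCL; the value expressions, in particular the source of the private key material, are assumptions:

# sensitive = true is what makes Terraform mask these values in plan and apply output.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key # assumed source of the key
  sensitive = true
}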
2025-09-02 00:02:29.249266 | orchestrator | 00:02:29.248 STDOUT terraform: Outputs: 2025-09-02 00:02:29.249322 | orchestrator | 00:02:29.248 STDOUT terraform: manager_address = 2025-09-02 00:02:29.249356 | orchestrator | 00:02:29.248 STDOUT terraform: private_key = 2025-09-02 00:02:29.710031 | orchestrator | ok: Runtime: 0:01:10.297644 2025-09-02 00:02:29.749620 | 2025-09-02 00:02:29.749771 | TASK [Create infrastructure (stable)] 2025-09-02 00:02:30.282725 | orchestrator | skipping: Conditional result was False 2025-09-02 00:02:30.299308 | 2025-09-02 00:02:30.299492 | TASK [Fetch manager address] 2025-09-02 00:02:30.736085 | orchestrator | ok 2025-09-02 00:02:30.742655 | 2025-09-02 00:02:30.742748 | TASK [Set manager_host address] 2025-09-02 00:02:30.807467 | orchestrator | ok 2025-09-02 00:02:30.818036 | 2025-09-02 00:02:30.818209 | LOOP [Update ansible collections] 2025-09-02 00:02:31.680769 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-02 00:02:31.681117 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-02 00:02:31.681188 | orchestrator | Starting galaxy collection install process 2025-09-02 00:02:31.681238 | orchestrator | Process install dependency map 2025-09-02 00:02:31.681285 | orchestrator | Starting collection install process 2025-09-02 00:02:31.681322 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons' 2025-09-02 00:02:31.681362 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons 2025-09-02 00:02:31.681404 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-09-02 00:02:31.681479 | orchestrator | ok: Item: commons Runtime: 0:00:00.541918 2025-09-02 00:02:32.515690 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-02 00:02:32.515852 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-02 00:02:32.515915 | orchestrator | Starting galaxy collection install process 2025-09-02 00:02:32.515962 | orchestrator | Process install dependency map 2025-09-02 00:02:32.516005 | orchestrator | Starting collection install process 2025-09-02 00:02:32.516090 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services' 2025-09-02 00:02:32.516140 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services 2025-09-02 00:02:32.516173 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-02 00:02:32.516224 | orchestrator | ok: Item: services Runtime: 0:00:00.587491 2025-09-02 00:02:32.537607 | 2025-09-02 00:02:32.537747 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-02 00:02:43.078604 | orchestrator | ok 2025-09-02 00:02:43.090986 | 2025-09-02 00:02:43.091137 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-02 00:03:43.139201 | orchestrator | ok 2025-09-02 00:03:43.151426 | 2025-09-02 00:03:43.151550 | TASK [Fetch manager ssh hostkey] 2025-09-02 00:03:44.722455 | orchestrator | Output suppressed because no_log was given 2025-09-02 00:03:44.738814 | 2025-09-02 00:03:44.739078 | TASK [Get ssh keypair from terraform environment] 2025-09-02 00:03:45.277072 | orchestrator 
| ok: Runtime: 0:00:00.008866 2025-09-02 00:03:45.293533 | 2025-09-02 00:03:45.293700 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-02 00:03:45.343251 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-02 00:03:45.353225 | 2025-09-02 00:03:45.353348 | TASK [Run manager part 0] 2025-09-02 00:03:46.168866 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-02 00:03:46.211828 | orchestrator | 2025-09-02 00:03:46.211880 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-02 00:03:46.211895 | orchestrator | 2025-09-02 00:03:46.211922 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-02 00:03:47.983449 | orchestrator | ok: [testbed-manager] 2025-09-02 00:03:47.983491 | orchestrator | 2025-09-02 00:03:47.983513 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-02 00:03:47.983522 | orchestrator | 2025-09-02 00:03:47.983531 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-02 00:03:49.876517 | orchestrator | ok: [testbed-manager] 2025-09-02 00:03:49.876589 | orchestrator | 2025-09-02 00:03:49.876640 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-02 00:03:50.564515 | orchestrator | ok: [testbed-manager] 2025-09-02 00:03:50.564566 | orchestrator | 2025-09-02 00:03:50.564577 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-02 00:03:50.613503 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:03:50.613544 | orchestrator | 2025-09-02 00:03:50.613553 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-02 00:03:50.645982 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:03:50.646038 | orchestrator | 2025-09-02 00:03:50.646047 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-02 00:03:50.672990 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:03:50.673024 | orchestrator | 2025-09-02 00:03:50.673030 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-02 00:03:50.697549 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:03:50.697581 | orchestrator | 2025-09-02 00:03:50.697586 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-02 00:03:50.722100 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:03:50.722129 | orchestrator | 2025-09-02 00:03:50.722135 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-02 00:03:50.747860 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:03:50.747889 | orchestrator | 2025-09-02 00:03:50.747898 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-02 00:03:50.772754 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:03:50.772780 | orchestrator | 2025-09-02 00:03:50.772786 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-02 00:03:51.549239 | orchestrator | changed: 
[testbed-manager] 2025-09-02 00:03:51.549315 | orchestrator | 2025-09-02 00:03:51.549331 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-02 00:06:30.331936 | orchestrator | changed: [testbed-manager] 2025-09-02 00:06:30.332007 | orchestrator | 2025-09-02 00:06:30.332022 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-02 00:07:57.052725 | orchestrator | changed: [testbed-manager] 2025-09-02 00:07:57.052823 | orchestrator | 2025-09-02 00:07:57.052840 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-02 00:08:18.086916 | orchestrator | changed: [testbed-manager] 2025-09-02 00:08:18.087015 | orchestrator | 2025-09-02 00:08:18.087035 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-02 00:08:27.005055 | orchestrator | changed: [testbed-manager] 2025-09-02 00:08:27.005098 | orchestrator | 2025-09-02 00:08:27.005107 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-02 00:08:27.052072 | orchestrator | ok: [testbed-manager] 2025-09-02 00:08:27.052121 | orchestrator | 2025-09-02 00:08:27.052128 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-02 00:08:27.860930 | orchestrator | ok: [testbed-manager] 2025-09-02 00:08:27.860973 | orchestrator | 2025-09-02 00:08:27.860984 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-02 00:08:28.606931 | orchestrator | changed: [testbed-manager] 2025-09-02 00:08:28.607020 | orchestrator | 2025-09-02 00:08:28.607038 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-02 00:08:35.091610 | orchestrator | changed: [testbed-manager] 2025-09-02 00:08:35.091703 | orchestrator | 2025-09-02 00:08:35.091743 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-09-02 00:08:41.098750 | orchestrator | changed: [testbed-manager] 2025-09-02 00:08:41.098838 | orchestrator | 2025-09-02 00:08:41.098856 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-02 00:08:43.846368 | orchestrator | changed: [testbed-manager] 2025-09-02 00:08:43.846459 | orchestrator | 2025-09-02 00:08:43.846476 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-02 00:08:45.691251 | orchestrator | changed: [testbed-manager] 2025-09-02 00:08:45.691350 | orchestrator | 2025-09-02 00:08:45.691365 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-02 00:08:47.271074 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-02 00:08:47.271152 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-02 00:08:47.271164 | orchestrator | 2025-09-02 00:08:47.271175 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-02 00:08:47.314527 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-02 00:08:47.314574 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-02 00:08:47.314581 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2025-09-02 00:08:47.314587 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-09-02 00:08:50.559441 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-02 00:08:50.559519 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-02 00:08:50.559531 | orchestrator | 2025-09-02 00:08:50.559541 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-02 00:08:51.129141 | orchestrator | changed: [testbed-manager] 2025-09-02 00:08:51.129212 | orchestrator | 2025-09-02 00:08:51.129226 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-02 00:09:12.014224 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-02 00:09:12.014305 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-02 00:09:12.014316 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-02 00:09:12.014324 | orchestrator | 2025-09-02 00:09:12.014332 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-02 00:09:14.354357 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-09-02 00:09:14.354424 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-02 00:09:14.354439 | orchestrator | 2025-09-02 00:09:14.354451 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-02 00:09:14.354464 | orchestrator | 2025-09-02 00:09:14.354475 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-02 00:09:15.778317 | orchestrator | ok: [testbed-manager] 2025-09-02 00:09:15.778406 | orchestrator | 2025-09-02 00:09:15.778425 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-02 00:09:15.822204 | orchestrator | ok: [testbed-manager] 2025-09-02 00:09:15.822239 | orchestrator | 2025-09-02 00:09:15.822245 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-02 00:09:15.881641 | orchestrator | ok: [testbed-manager] 2025-09-02 00:09:15.881676 | orchestrator | 2025-09-02 00:09:15.881681 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-02 00:09:16.615588 | orchestrator | changed: [testbed-manager] 2025-09-02 00:09:16.615675 | orchestrator | 2025-09-02 00:09:16.615693 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-02 00:09:17.397202 | orchestrator | changed: [testbed-manager] 2025-09-02 00:09:17.397305 | orchestrator | 2025-09-02 00:09:17.397321 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-02 00:09:18.787483 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-02 00:09:18.787656 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-02 00:09:18.787671 | orchestrator | 2025-09-02 00:09:18.787697 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-02 00:09:20.111032 | orchestrator | changed: [testbed-manager] 2025-09-02 00:09:20.111125 | orchestrator | 2025-09-02 00:09:20.111138 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc 
configuration file] *** 2025-09-02 00:09:21.833943 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-02 00:09:21.834080 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-02 00:09:21.834099 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-02 00:09:21.834111 | orchestrator | 2025-09-02 00:09:21.834125 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-02 00:09:21.889739 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:09:21.889787 | orchestrator | 2025-09-02 00:09:21.889793 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-02 00:09:22.643358 | orchestrator | changed: [testbed-manager] 2025-09-02 00:09:22.643419 | orchestrator | 2025-09-02 00:09:22.643431 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-02 00:09:22.711538 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:09:22.711585 | orchestrator | 2025-09-02 00:09:22.711593 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-02 00:09:23.528719 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-02 00:09:23.528796 | orchestrator | changed: [testbed-manager] 2025-09-02 00:09:23.528813 | orchestrator | 2025-09-02 00:09:23.528825 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-02 00:09:23.566976 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:09:23.567018 | orchestrator | 2025-09-02 00:09:23.567027 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-02 00:09:23.600936 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:09:23.600977 | orchestrator | 2025-09-02 00:09:23.600986 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-02 00:09:23.633133 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:09:23.633190 | orchestrator | 2025-09-02 00:09:23.633200 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-02 00:09:23.682280 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:09:23.682340 | orchestrator | 2025-09-02 00:09:23.682356 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-02 00:09:24.385919 | orchestrator | ok: [testbed-manager] 2025-09-02 00:09:24.385954 | orchestrator | 2025-09-02 00:09:24.385960 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-02 00:09:24.385965 | orchestrator | 2025-09-02 00:09:24.385969 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-02 00:09:25.803158 | orchestrator | ok: [testbed-manager] 2025-09-02 00:09:25.803188 | orchestrator | 2025-09-02 00:09:25.803194 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-02 00:09:26.770962 | orchestrator | changed: [testbed-manager] 2025-09-02 00:09:26.771002 | orchestrator | 2025-09-02 00:09:26.771008 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:09:26.771014 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-02 
00:09:26.771018 | orchestrator | 2025-09-02 00:09:27.071610 | orchestrator | ok: Runtime: 0:05:41.237517 2025-09-02 00:09:27.093508 | 2025-09-02 00:09:27.093745 | TASK [Point out that the log in on the manager is now possible] 2025-09-02 00:09:27.143710 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-02 00:09:27.153885 | 2025-09-02 00:09:27.154004 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-02 00:09:27.195864 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-02 00:09:27.206924 | 2025-09-02 00:09:27.207058 | TASK [Run manager part 1 + 2] 2025-09-02 00:09:28.087731 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-02 00:09:28.147230 | orchestrator | 2025-09-02 00:09:28.147297 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-02 00:09:28.147305 | orchestrator | 2025-09-02 00:09:28.147319 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-02 00:09:30.727990 | orchestrator | ok: [testbed-manager] 2025-09-02 00:09:30.728088 | orchestrator | 2025-09-02 00:09:30.728149 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-02 00:09:30.761733 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:09:30.761794 | orchestrator | 2025-09-02 00:09:30.761805 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-02 00:09:30.792346 | orchestrator | ok: [testbed-manager] 2025-09-02 00:09:30.792396 | orchestrator | 2025-09-02 00:09:30.792406 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-02 00:09:30.823141 | orchestrator | ok: [testbed-manager] 2025-09-02 00:09:30.823188 | orchestrator | 2025-09-02 00:09:30.823197 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-02 00:09:30.885864 | orchestrator | ok: [testbed-manager] 2025-09-02 00:09:30.885914 | orchestrator | 2025-09-02 00:09:30.885924 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-02 00:09:30.945860 | orchestrator | ok: [testbed-manager] 2025-09-02 00:09:30.945917 | orchestrator | 2025-09-02 00:09:30.945928 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-02 00:09:30.988576 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-02 00:09:30.988636 | orchestrator | 2025-09-02 00:09:30.988647 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-02 00:09:31.692691 | orchestrator | ok: [testbed-manager] 2025-09-02 00:09:31.692879 | orchestrator | 2025-09-02 00:09:31.692898 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-02 00:09:31.742944 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:09:31.743007 | orchestrator | 2025-09-02 00:09:31.743018 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-02 00:09:33.097225 | orchestrator | changed:
[testbed-manager] 2025-09-02 00:09:33.097313 | orchestrator | 2025-09-02 00:09:33.097330 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-02 00:09:33.670661 | orchestrator | ok: [testbed-manager] 2025-09-02 00:09:33.670736 | orchestrator | 2025-09-02 00:09:33.670755 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-02 00:09:34.812882 | orchestrator | changed: [testbed-manager] 2025-09-02 00:09:34.812943 | orchestrator | 2025-09-02 00:09:34.812961 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-02 00:09:50.891698 | orchestrator | changed: [testbed-manager] 2025-09-02 00:09:50.891789 | orchestrator | 2025-09-02 00:09:50.891806 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-02 00:09:51.530910 | orchestrator | ok: [testbed-manager] 2025-09-02 00:09:51.530999 | orchestrator | 2025-09-02 00:09:51.531019 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-02 00:09:51.583055 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:09:51.583129 | orchestrator | 2025-09-02 00:09:51.583143 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-02 00:09:52.490486 | orchestrator | changed: [testbed-manager] 2025-09-02 00:09:52.490573 | orchestrator | 2025-09-02 00:09:52.490590 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-02 00:09:53.394296 | orchestrator | changed: [testbed-manager] 2025-09-02 00:09:53.394385 | orchestrator | 2025-09-02 00:09:53.394403 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-02 00:09:53.965314 | orchestrator | changed: [testbed-manager] 2025-09-02 00:09:53.966124 | orchestrator | 2025-09-02 00:09:53.966153 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-02 00:09:54.007630 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-02 00:09:54.007738 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-02 00:09:54.007753 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-02 00:09:54.007765 | orchestrator | deprecation_warnings=False in ansible.cfg. 
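Editor's note: both manager bootstrap parts install their Python dependencies into the dedicated virtualenv at /opt/venv rather than into the system interpreter (see the "Create venv directory" and "Install ... in venv" tasks above and the "Install python requirements in venv" task that follows). A minimal shell sketch of what those tasks amount to; package names and version constraints are taken from the task output, while the exact playbook implementation is not shown in this log:

    # Sketch only: shell equivalent of the venv-related tasks above,
    # under the assumption that /opt/venv is created with the stock venv module.
    python3 -m venv /opt/venv
    /opt/venv/bin/pip install netaddr ansible-core 'requests>=2.32.2' 'docker>=7.1.0'
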
2025-09-02 00:09:56.361516 | orchestrator | changed: [testbed-manager] 2025-09-02 00:09:56.361578 | orchestrator | 2025-09-02 00:09:56.361589 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-02 00:10:06.666466 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-02 00:10:06.666537 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-02 00:10:06.666555 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-02 00:10:06.666567 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-02 00:10:06.666587 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-02 00:10:06.666598 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-02 00:10:06.666610 | orchestrator | 2025-09-02 00:10:06.666623 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-02 00:10:07.704556 | orchestrator | changed: [testbed-manager] 2025-09-02 00:10:07.704629 | orchestrator | 2025-09-02 00:10:07.704644 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-02 00:10:07.745295 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:10:07.745362 | orchestrator | 2025-09-02 00:10:07.745377 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-02 00:10:10.935609 | orchestrator | changed: [testbed-manager] 2025-09-02 00:10:10.935647 | orchestrator | 2025-09-02 00:10:10.935655 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-02 00:10:10.976647 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:10:10.976683 | orchestrator | 2025-09-02 00:10:10.976691 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-02 00:11:58.985685 | orchestrator | changed: [testbed-manager] 2025-09-02 00:11:58.985726 | orchestrator | 2025-09-02 00:11:58.985734 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-02 00:12:00.144493 | orchestrator | ok: [testbed-manager] 2025-09-02 00:12:00.144534 | orchestrator | 2025-09-02 00:12:00.144541 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:12:00.144547 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-02 00:12:00.144552 | orchestrator | 2025-09-02 00:12:00.345631 | orchestrator | ok: Runtime: 0:02:32.711206 2025-09-02 00:12:00.361985 | 2025-09-02 00:12:00.362128 | TASK [Reboot manager] 2025-09-02 00:12:01.898663 | orchestrator | ok: Runtime: 0:00:00.982417 2025-09-02 00:12:01.913081 | 2025-09-02 00:12:01.913224 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-02 00:12:19.707330 | orchestrator | ok 2025-09-02 00:12:19.721231 | 2025-09-02 00:12:19.721407 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-02 00:13:19.767436 | orchestrator | ok 2025-09-02 00:13:19.777393 | 2025-09-02 00:13:19.777568 | TASK [Deploy manager + bootstrap nodes] 2025-09-02 00:13:22.326834 | orchestrator | 2025-09-02 00:13:22.327102 | orchestrator | # DEPLOY MANAGER 2025-09-02 00:13:22.327135 | orchestrator | 2025-09-02 00:13:22.327150 | orchestrator | + set -e 2025-09-02 00:13:22.327164 | orchestrator | + echo 2025-09-02 00:13:22.327178 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-09-02 00:13:22.327196 | orchestrator | + echo 2025-09-02 00:13:22.327249 | orchestrator | + cat /opt/manager-vars.sh 2025-09-02 00:13:22.330529 | orchestrator | export NUMBER_OF_NODES=6 2025-09-02 00:13:22.330561 | orchestrator | 2025-09-02 00:13:22.330574 | orchestrator | export CEPH_VERSION=reef 2025-09-02 00:13:22.330587 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-02 00:13:22.330600 | orchestrator | export MANAGER_VERSION=latest 2025-09-02 00:13:22.330623 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-02 00:13:22.330634 | orchestrator | 2025-09-02 00:13:22.330653 | orchestrator | export ARA=false 2025-09-02 00:13:22.330664 | orchestrator | export DEPLOY_MODE=manager 2025-09-02 00:13:22.330682 | orchestrator | export TEMPEST=true 2025-09-02 00:13:22.330693 | orchestrator | export IS_ZUUL=true 2025-09-02 00:13:22.330704 | orchestrator | 2025-09-02 00:13:22.330723 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.185 2025-09-02 00:13:22.330734 | orchestrator | export EXTERNAL_API=false 2025-09-02 00:13:22.330754 | orchestrator | 2025-09-02 00:13:22.330767 | orchestrator | export IMAGE_USER=ubuntu 2025-09-02 00:13:22.330781 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-02 00:13:22.330792 | orchestrator | 2025-09-02 00:13:22.330803 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-02 00:13:22.330821 | orchestrator | 2025-09-02 00:13:22.330832 | orchestrator | + echo 2025-09-02 00:13:22.330845 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-02 00:13:22.331798 | orchestrator | ++ export INTERACTIVE=false 2025-09-02 00:13:22.331833 | orchestrator | ++ INTERACTIVE=false 2025-09-02 00:13:22.331851 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-02 00:13:22.331870 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-02 00:13:22.331895 | orchestrator | + source /opt/manager-vars.sh 2025-09-02 00:13:22.331914 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-02 00:13:22.331952 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-02 00:13:22.331971 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-02 00:13:22.331990 | orchestrator | ++ CEPH_VERSION=reef 2025-09-02 00:13:22.332008 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-02 00:13:22.332027 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-02 00:13:22.332047 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-02 00:13:22.332112 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-02 00:13:22.332131 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-02 00:13:22.332179 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-02 00:13:22.332200 | orchestrator | ++ export ARA=false 2025-09-02 00:13:22.332220 | orchestrator | ++ ARA=false 2025-09-02 00:13:22.332239 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-02 00:13:22.332258 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-02 00:13:22.332282 | orchestrator | ++ export TEMPEST=true 2025-09-02 00:13:22.332305 | orchestrator | ++ TEMPEST=true 2025-09-02 00:13:22.332317 | orchestrator | ++ export IS_ZUUL=true 2025-09-02 00:13:22.332328 | orchestrator | ++ IS_ZUUL=true 2025-09-02 00:13:22.332339 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.185 2025-09-02 00:13:22.332349 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.185 2025-09-02 00:13:22.332360 | orchestrator | ++ export EXTERNAL_API=false 2025-09-02 00:13:22.332379 | orchestrator | ++ EXTERNAL_API=false 2025-09-02 00:13:22.332403 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-02 
00:13:22.332421 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-02 00:13:22.332438 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-02 00:13:22.332466 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-02 00:13:22.332487 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-02 00:13:22.332504 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-02 00:13:22.332523 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-09-02 00:13:22.392296 | orchestrator | + docker version 2025-09-02 00:13:22.687280 | orchestrator | Client: Docker Engine - Community 2025-09-02 00:13:22.687407 | orchestrator | Version: 27.5.1 2025-09-02 00:13:22.687423 | orchestrator | API version: 1.47 2025-09-02 00:13:22.687438 | orchestrator | Go version: go1.22.11 2025-09-02 00:13:22.687450 | orchestrator | Git commit: 9f9e405 2025-09-02 00:13:22.687461 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-02 00:13:22.687474 | orchestrator | OS/Arch: linux/amd64 2025-09-02 00:13:22.687485 | orchestrator | Context: default 2025-09-02 00:13:22.687496 | orchestrator | 2025-09-02 00:13:22.687508 | orchestrator | Server: Docker Engine - Community 2025-09-02 00:13:22.687519 | orchestrator | Engine: 2025-09-02 00:13:22.687531 | orchestrator | Version: 27.5.1 2025-09-02 00:13:22.687543 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-09-02 00:13:22.687586 | orchestrator | Go version: go1.22.11 2025-09-02 00:13:22.687598 | orchestrator | Git commit: 4c9b3b0 2025-09-02 00:13:22.687609 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-02 00:13:22.687620 | orchestrator | OS/Arch: linux/amd64 2025-09-02 00:13:22.687631 | orchestrator | Experimental: false 2025-09-02 00:13:22.687642 | orchestrator | containerd: 2025-09-02 00:13:22.687653 | orchestrator | Version: 1.7.27 2025-09-02 00:13:22.687665 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-09-02 00:13:22.687676 | orchestrator | runc: 2025-09-02 00:13:22.687688 | orchestrator | Version: 1.2.5 2025-09-02 00:13:22.687699 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-09-02 00:13:22.687711 | orchestrator | docker-init: 2025-09-02 00:13:22.687736 | orchestrator | Version: 0.19.0 2025-09-02 00:13:22.687749 | orchestrator | GitCommit: de40ad0 2025-09-02 00:13:22.690243 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-09-02 00:13:22.699922 | orchestrator | + set -e 2025-09-02 00:13:22.699970 | orchestrator | + source /opt/manager-vars.sh 2025-09-02 00:13:22.699983 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-02 00:13:22.700003 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-02 00:13:22.700014 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-02 00:13:22.700025 | orchestrator | ++ CEPH_VERSION=reef 2025-09-02 00:13:22.700043 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-02 00:13:22.700077 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-02 00:13:22.700088 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-02 00:13:22.700105 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-02 00:13:22.700116 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-02 00:13:22.700127 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-02 00:13:22.700138 | orchestrator | ++ export ARA=false 2025-09-02 00:13:22.700155 | orchestrator | ++ ARA=false 2025-09-02 00:13:22.700167 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-02 00:13:22.700178 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-02 00:13:22.700189 | orchestrator | ++ 
export TEMPEST=true 2025-09-02 00:13:22.700199 | orchestrator | ++ TEMPEST=true 2025-09-02 00:13:22.700210 | orchestrator | ++ export IS_ZUUL=true 2025-09-02 00:13:22.700220 | orchestrator | ++ IS_ZUUL=true 2025-09-02 00:13:22.700238 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.185 2025-09-02 00:13:22.700249 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.185 2025-09-02 00:13:22.700259 | orchestrator | ++ export EXTERNAL_API=false 2025-09-02 00:13:22.700270 | orchestrator | ++ EXTERNAL_API=false 2025-09-02 00:13:22.700281 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-02 00:13:22.700291 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-02 00:13:22.700302 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-02 00:13:22.700312 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-02 00:13:22.700324 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-02 00:13:22.700334 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-02 00:13:22.700345 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-02 00:13:22.700360 | orchestrator | ++ export INTERACTIVE=false 2025-09-02 00:13:22.700371 | orchestrator | ++ INTERACTIVE=false 2025-09-02 00:13:22.700382 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-02 00:13:22.700398 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-02 00:13:22.700531 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-02 00:13:22.700589 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-02 00:13:22.700602 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-09-02 00:13:22.708536 | orchestrator | + set -e 2025-09-02 00:13:22.708572 | orchestrator | + VERSION=reef 2025-09-02 00:13:22.709686 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-02 00:13:22.715785 | orchestrator | + [[ -n ceph_version: reef ]] 2025-09-02 00:13:22.715810 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-09-02 00:13:22.721555 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-09-02 00:13:22.728892 | orchestrator | + set -e 2025-09-02 00:13:22.728929 | orchestrator | + VERSION=2024.2 2025-09-02 00:13:22.729446 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-02 00:13:22.733373 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-09-02 00:13:22.733403 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-09-02 00:13:22.739204 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-09-02 00:13:22.740218 | orchestrator | ++ semver latest 7.0.0 2025-09-02 00:13:22.801026 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-02 00:13:22.801133 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-02 00:13:22.801146 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-09-02 00:13:22.801158 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-09-02 00:13:22.910014 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-02 00:13:22.911483 | orchestrator | + source /opt/venv/bin/activate 2025-09-02 00:13:22.913549 | orchestrator | ++ deactivate nondestructive 2025-09-02 00:13:22.913603 | orchestrator | ++ '[' -n '' ']' 2025-09-02 00:13:22.913616 | orchestrator | ++ '[' -n '' ']' 2025-09-02 00:13:22.913645 | orchestrator | ++ hash -r 2025-09-02 00:13:22.913657 | orchestrator | ++ 
'[' -n '' ']' 2025-09-02 00:13:22.913678 | orchestrator | ++ unset VIRTUAL_ENV 2025-09-02 00:13:22.913690 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-09-02 00:13:22.913702 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-09-02 00:13:22.913719 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-09-02 00:13:22.914011 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-09-02 00:13:22.914092 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-09-02 00:13:22.914104 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-09-02 00:13:22.914115 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-02 00:13:22.914127 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-02 00:13:22.914138 | orchestrator | ++ export PATH 2025-09-02 00:13:22.914149 | orchestrator | ++ '[' -n '' ']' 2025-09-02 00:13:22.914160 | orchestrator | ++ '[' -z '' ']' 2025-09-02 00:13:22.914176 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-09-02 00:13:22.914188 | orchestrator | ++ PS1='(venv) ' 2025-09-02 00:13:22.914198 | orchestrator | ++ export PS1 2025-09-02 00:13:22.914216 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-09-02 00:13:22.914227 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-09-02 00:13:22.914237 | orchestrator | ++ hash -r 2025-09-02 00:13:22.914272 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-09-02 00:13:24.227931 | orchestrator | 2025-09-02 00:13:24.228075 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-09-02 00:13:24.228092 | orchestrator | 2025-09-02 00:13:24.228103 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-02 00:13:24.818201 | orchestrator | ok: [testbed-manager] 2025-09-02 00:13:24.819182 | orchestrator | 2025-09-02 00:13:24.819218 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-02 00:13:25.876412 | orchestrator | changed: [testbed-manager] 2025-09-02 00:13:25.876535 | orchestrator | 2025-09-02 00:13:25.876552 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-09-02 00:13:25.876565 | orchestrator | 2025-09-02 00:13:25.876577 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-02 00:13:29.266999 | orchestrator | ok: [testbed-manager] 2025-09-02 00:13:29.267161 | orchestrator | 2025-09-02 00:13:29.267180 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-09-02 00:13:29.315680 | orchestrator | ok: [testbed-manager] 2025-09-02 00:13:29.315713 | orchestrator | 2025-09-02 00:13:29.315729 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-09-02 00:13:29.821737 | orchestrator | changed: [testbed-manager] 2025-09-02 00:13:29.822818 | orchestrator | 2025-09-02 00:13:29.822894 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-09-02 00:13:29.855506 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:13:29.855541 | orchestrator | 2025-09-02 00:13:29.855554 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-09-02 00:13:30.208645 | orchestrator | changed: [testbed-manager] 2025-09-02 00:13:30.208774 | orchestrator | 2025-09-02 00:13:30.208800 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-09-02 00:13:30.264027 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:13:30.264186 | orchestrator | 2025-09-02 00:13:30.264204 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-09-02 00:13:30.619995 | orchestrator | ok: [testbed-manager] 2025-09-02 00:13:30.620144 | orchestrator | 2025-09-02 00:13:30.620162 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-09-02 00:13:30.729386 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:13:30.729444 | orchestrator | 2025-09-02 00:13:30.729458 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-09-02 00:13:30.729470 | orchestrator | 2025-09-02 00:13:30.729484 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-02 00:13:32.494959 | orchestrator | ok: [testbed-manager] 2025-09-02 00:13:32.495151 | orchestrator | 2025-09-02 00:13:32.495169 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-09-02 00:13:32.625450 | orchestrator | included: osism.services.traefik for testbed-manager 2025-09-02 00:13:32.625533 | orchestrator | 2025-09-02 00:13:32.625548 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-09-02 00:13:32.678423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-09-02 00:13:32.678473 | orchestrator | 2025-09-02 00:13:32.678488 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-09-02 00:13:33.978958 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-09-02 00:13:33.979036 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-09-02 00:13:33.979097 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-09-02 00:13:33.979109 | orchestrator | 2025-09-02 00:13:33.979122 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-09-02 00:13:35.878805 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-09-02 00:13:35.878921 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-09-02 00:13:35.878940 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-09-02 00:13:35.878952 | orchestrator | 2025-09-02 00:13:35.878965 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-09-02 00:13:36.545411 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-02 00:13:36.545486 | orchestrator | changed: [testbed-manager] 2025-09-02 00:13:36.545501 | orchestrator | 2025-09-02 00:13:36.545513 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-09-02 00:13:37.202376 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-02 00:13:37.202446 | orchestrator | changed: [testbed-manager] 2025-09-02 00:13:37.202462 | orchestrator | 2025-09-02 00:13:37.202475 | orchestrator | TASK [osism.services.traefik : Copy dynamic 
configuration] ********************* 2025-09-02 00:13:37.248754 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:13:37.248808 | orchestrator | 2025-09-02 00:13:37.248821 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-09-02 00:13:37.613680 | orchestrator | ok: [testbed-manager] 2025-09-02 00:13:37.613759 | orchestrator | 2025-09-02 00:13:37.613774 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-09-02 00:13:37.690622 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-09-02 00:13:37.690705 | orchestrator | 2025-09-02 00:13:37.690718 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-09-02 00:13:39.171775 | orchestrator | changed: [testbed-manager] 2025-09-02 00:13:39.171878 | orchestrator | 2025-09-02 00:13:39.171893 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-09-02 00:13:40.044798 | orchestrator | changed: [testbed-manager] 2025-09-02 00:13:40.044903 | orchestrator | 2025-09-02 00:13:40.044918 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-09-02 00:13:51.839416 | orchestrator | changed: [testbed-manager] 2025-09-02 00:13:51.839549 | orchestrator | 2025-09-02 00:13:51.839566 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-09-02 00:13:51.898648 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:13:51.898738 | orchestrator | 2025-09-02 00:13:51.898752 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-09-02 00:13:51.898764 | orchestrator | 2025-09-02 00:13:51.898776 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-02 00:13:53.657709 | orchestrator | ok: [testbed-manager] 2025-09-02 00:13:53.657822 | orchestrator | 2025-09-02 00:13:53.657867 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-09-02 00:13:53.780360 | orchestrator | included: osism.services.manager for testbed-manager 2025-09-02 00:13:53.780479 | orchestrator | 2025-09-02 00:13:53.780503 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-09-02 00:13:53.839295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-09-02 00:13:53.839398 | orchestrator | 2025-09-02 00:13:53.839414 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-09-02 00:13:56.495217 | orchestrator | ok: [testbed-manager] 2025-09-02 00:13:56.495332 | orchestrator | 2025-09-02 00:13:56.495349 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-09-02 00:13:56.557417 | orchestrator | ok: [testbed-manager] 2025-09-02 00:13:56.557479 | orchestrator | 2025-09-02 00:13:56.557494 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-09-02 00:13:56.699380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-09-02 00:13:56.699469 | orchestrator | 2025-09-02 00:13:56.699483 | 
orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-09-02 00:13:59.641209 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-09-02 00:13:59.641321 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-09-02 00:13:59.641334 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-09-02 00:13:59.641346 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-09-02 00:13:59.641356 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-09-02 00:13:59.641366 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-09-02 00:13:59.641376 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-09-02 00:13:59.641386 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-09-02 00:13:59.641396 | orchestrator | 2025-09-02 00:13:59.641407 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-09-02 00:14:00.298905 | orchestrator | changed: [testbed-manager] 2025-09-02 00:14:00.299015 | orchestrator | 2025-09-02 00:14:00.299086 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-09-02 00:14:00.963369 | orchestrator | changed: [testbed-manager] 2025-09-02 00:14:00.963453 | orchestrator | 2025-09-02 00:14:00.963460 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-09-02 00:14:01.038411 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-09-02 00:14:01.038510 | orchestrator | 2025-09-02 00:14:01.038518 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-09-02 00:14:02.309657 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-09-02 00:14:02.309784 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-09-02 00:14:02.309799 | orchestrator | 2025-09-02 00:14:02.309830 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-09-02 00:14:02.940471 | orchestrator | changed: [testbed-manager] 2025-09-02 00:14:02.940575 | orchestrator | 2025-09-02 00:14:02.940591 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-09-02 00:14:03.002527 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:14:03.002614 | orchestrator | 2025-09-02 00:14:03.002629 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-09-02 00:14:03.077613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-09-02 00:14:03.077665 | orchestrator | 2025-09-02 00:14:03.077678 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2025-09-02 00:14:03.737687 | orchestrator | changed: [testbed-manager] 2025-09-02 00:14:03.737799 | orchestrator | 2025-09-02 00:14:03.737815 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-09-02 00:14:03.800424 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-09-02 00:14:03.800579 | orchestrator | 2025-09-02 00:14:03.800597 | orchestrator | 
TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-09-02 00:14:05.208913 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-02 00:14:05.209066 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-02 00:14:05.209082 | orchestrator | changed: [testbed-manager] 2025-09-02 00:14:05.209095 | orchestrator | 2025-09-02 00:14:05.209108 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-09-02 00:14:05.826556 | orchestrator | changed: [testbed-manager] 2025-09-02 00:14:05.826664 | orchestrator | 2025-09-02 00:14:05.826679 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-09-02 00:14:05.880227 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:14:05.880323 | orchestrator | 2025-09-02 00:14:05.880336 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-09-02 00:14:05.965929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-09-02 00:14:05.966128 | orchestrator | 2025-09-02 00:14:05.966146 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-09-02 00:14:06.515125 | orchestrator | changed: [testbed-manager] 2025-09-02 00:14:06.515226 | orchestrator | 2025-09-02 00:14:06.515240 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-09-02 00:14:06.933676 | orchestrator | changed: [testbed-manager] 2025-09-02 00:14:06.933767 | orchestrator | 2025-09-02 00:14:06.933776 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-09-02 00:14:08.226420 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-09-02 00:14:08.226539 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-09-02 00:14:08.226554 | orchestrator | 2025-09-02 00:14:08.226567 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-09-02 00:14:08.945629 | orchestrator | changed: [testbed-manager] 2025-09-02 00:14:08.945739 | orchestrator | 2025-09-02 00:14:08.945757 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-09-02 00:14:09.360245 | orchestrator | ok: [testbed-manager] 2025-09-02 00:14:09.360323 | orchestrator | 2025-09-02 00:14:09.360334 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-09-02 00:14:09.730523 | orchestrator | changed: [testbed-manager] 2025-09-02 00:14:09.730630 | orchestrator | 2025-09-02 00:14:09.730645 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-09-02 00:14:09.783120 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:14:09.783195 | orchestrator | 2025-09-02 00:14:09.783209 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-09-02 00:14:09.861627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-09-02 00:14:09.861710 | orchestrator | 2025-09-02 00:14:09.861724 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-09-02 00:14:09.908850 | orchestrator | ok: [testbed-manager] 2025-09-02 00:14:09.908935 | 
orchestrator | 2025-09-02 00:14:09.908949 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-09-02 00:14:11.953210 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-09-02 00:14:11.953335 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-09-02 00:14:11.953352 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-09-02 00:14:11.953364 | orchestrator | 2025-09-02 00:14:11.953377 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-09-02 00:14:12.693095 | orchestrator | changed: [testbed-manager] 2025-09-02 00:14:12.693187 | orchestrator | 2025-09-02 00:14:12.693200 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-09-02 00:14:13.399664 | orchestrator | changed: [testbed-manager] 2025-09-02 00:14:13.399793 | orchestrator | 2025-09-02 00:14:13.399823 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-09-02 00:14:14.135205 | orchestrator | changed: [testbed-manager] 2025-09-02 00:14:14.135317 | orchestrator | 2025-09-02 00:14:14.135333 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-09-02 00:14:14.218407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-09-02 00:14:14.218526 | orchestrator | 2025-09-02 00:14:14.218543 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-09-02 00:14:14.275929 | orchestrator | ok: [testbed-manager] 2025-09-02 00:14:14.276059 | orchestrator | 2025-09-02 00:14:14.276074 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-09-02 00:14:15.013950 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-09-02 00:14:15.014143 | orchestrator | 2025-09-02 00:14:15.014161 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-09-02 00:14:15.110768 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-09-02 00:14:15.110871 | orchestrator | 2025-09-02 00:14:15.110888 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-09-02 00:14:15.829948 | orchestrator | changed: [testbed-manager] 2025-09-02 00:14:15.830164 | orchestrator | 2025-09-02 00:14:15.830183 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-09-02 00:14:16.423042 | orchestrator | ok: [testbed-manager] 2025-09-02 00:14:16.423121 | orchestrator | 2025-09-02 00:14:16.423128 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-09-02 00:14:16.469890 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:14:16.469964 | orchestrator | 2025-09-02 00:14:16.469977 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-09-02 00:14:16.520202 | orchestrator | ok: [testbed-manager] 2025-09-02 00:14:16.520258 | orchestrator | 2025-09-02 00:14:16.520265 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-09-02 00:14:17.381845 | orchestrator | changed: [testbed-manager] 2025-09-02 
00:14:17.381965 | orchestrator | 2025-09-02 00:14:17.381982 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-09-02 00:15:55.550177 | orchestrator | changed: [testbed-manager] 2025-09-02 00:15:55.550300 | orchestrator | 2025-09-02 00:15:55.550319 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-09-02 00:15:56.613635 | orchestrator | ok: [testbed-manager] 2025-09-02 00:15:56.613744 | orchestrator | 2025-09-02 00:15:56.613759 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-09-02 00:15:56.673336 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:15:56.673369 | orchestrator | 2025-09-02 00:15:56.673390 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-09-02 00:15:59.188561 | orchestrator | changed: [testbed-manager] 2025-09-02 00:15:59.188676 | orchestrator | 2025-09-02 00:15:59.188693 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-09-02 00:15:59.244313 | orchestrator | ok: [testbed-manager] 2025-09-02 00:15:59.244380 | orchestrator | 2025-09-02 00:15:59.244395 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-02 00:15:59.244407 | orchestrator | 2025-09-02 00:15:59.244419 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-09-02 00:15:59.308137 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:15:59.308198 | orchestrator | 2025-09-02 00:15:59.308211 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-09-02 00:16:59.367190 | orchestrator | Pausing for 60 seconds 2025-09-02 00:16:59.367301 | orchestrator | changed: [testbed-manager] 2025-09-02 00:16:59.367315 | orchestrator | 2025-09-02 00:16:59.367326 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-09-02 00:17:03.519266 | orchestrator | changed: [testbed-manager] 2025-09-02 00:17:03.519377 | orchestrator | 2025-09-02 00:17:03.519395 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-09-02 00:18:05.887224 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-09-02 00:18:05.887339 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-09-02 00:18:05.887355 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
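Editor's note: the manager stack is a Docker Compose project under /opt/manager, fronted by Traefik under /opt/traefik; the roles above copy a docker-compose.yml into each project directory, create the shared external network, start the services, and then wait until the containers report healthy. A rough shell sketch of the same sequence; the directory paths come from the log, while the network name "traefik" and the use of plain docker compose (instead of the role's systemd-managed service) are assumptions for illustration:

    # Sketch under assumptions: not the osism.services.traefik/manager roles themselves.
    docker network create traefik 2>/dev/null || true    # "Create traefik external network"
    docker compose --project-directory /opt/traefik up -d
    docker compose --project-directory /opt/manager up -d
    docker compose --project-directory /opt/manager ps   # the same command appears later in this log
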
2025-09-02 00:18:05.887394 | orchestrator | changed: [testbed-manager]
2025-09-02 00:18:05.887408 | orchestrator |
2025-09-02 00:18:05.887420 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-09-02 00:18:16.064398 | orchestrator | changed: [testbed-manager]
2025-09-02 00:18:16.064540 | orchestrator |
2025-09-02 00:18:16.064559 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-09-02 00:18:16.140500 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-09-02 00:18:16.140545 | orchestrator |
2025-09-02 00:18:16.140559 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-02 00:18:16.140571 | orchestrator |
2025-09-02 00:18:16.140583 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-09-02 00:18:16.195545 | orchestrator | skipping: [testbed-manager]
2025-09-02 00:18:16.195590 | orchestrator |
2025-09-02 00:18:16.195602 | orchestrator | PLAY RECAP *********************************************************************
2025-09-02 00:18:16.195615 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-09-02 00:18:16.195627 | orchestrator |
2025-09-02 00:18:16.311322 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-02 00:18:16.311360 | orchestrator | + deactivate
2025-09-02 00:18:16.311372 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-09-02 00:18:16.311411 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-02 00:18:16.311423 | orchestrator | + export PATH
2025-09-02 00:18:16.311434 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-09-02 00:18:16.311446 | orchestrator | + '[' -n '' ']'
2025-09-02 00:18:16.311457 | orchestrator | + hash -r
2025-09-02 00:18:16.311468 | orchestrator | + '[' -n '' ']'
2025-09-02 00:18:16.311479 | orchestrator | + unset VIRTUAL_ENV
2025-09-02 00:18:16.311490 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-09-02 00:18:16.311501 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-09-02 00:18:16.311512 | orchestrator | + unset -f deactivate
2025-09-02 00:18:16.311525 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-09-02 00:18:16.319666 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-02 00:18:16.319690 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-02 00:18:16.319701 | orchestrator | + local max_attempts=60
2025-09-02 00:18:16.319712 | orchestrator | + local name=ceph-ansible
2025-09-02 00:18:16.319724 | orchestrator | + local attempt_num=1
2025-09-02 00:18:16.320466 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-02 00:18:16.354343 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-02 00:18:16.354376 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-02 00:18:16.354388 | orchestrator | + local max_attempts=60
2025-09-02 00:18:16.354399 | orchestrator | + local name=kolla-ansible
2025-09-02 00:18:16.354410 | orchestrator | + local attempt_num=1
2025-09-02 00:18:16.355128 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-02 00:18:16.390203 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-02 00:18:16.390230 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-02 00:18:16.390242 | orchestrator | + local max_attempts=60
2025-09-02 00:18:16.390254 | orchestrator | + local name=osism-ansible
2025-09-02 00:18:16.390265 | orchestrator | + local attempt_num=1
2025-09-02 00:18:16.391206 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-02 00:18:16.433459 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-02 00:18:16.433484 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-02 00:18:16.433495 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-02 00:18:17.206219 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-09-02 00:18:17.470973 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-09-02 00:18:17.471068 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2025-09-02 00:18:17.471111 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2025-09-02 00:18:17.471123 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2025-09-02 00:18:17.471137 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp
2025-09-02 00:18:17.471158 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2025-09-02 00:18:17.471169 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2025-09-02 00:18:17.471180 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2025-09-02 00:18:17.471191 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy)
2025-09-02 00:18:17.471202 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2025-09-02 00:18:17.471212 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy)
2025-09-02 00:18:17.471223 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2025-09-02 00:18:17.471234 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2025-09-02 00:18:17.471245 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp
2025-09-02 00:18:17.471256 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2025-09-02 00:18:17.471266 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2025-09-02 00:18:17.481490 | orchestrator | ++ semver latest 7.0.0
2025-09-02 00:18:17.532319 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-02 00:18:17.532361 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-02 00:18:17.532376 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-09-02 00:18:17.537310 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-09-02 00:18:29.800574 | orchestrator | 2025-09-02 00:18:29 | INFO  | Task 934d0bfa-08f5-40fe-a92e-9157a2b626e3 (resolvconf) was prepared for execution.
2025-09-02 00:18:29.800689 | orchestrator | 2025-09-02 00:18:29 | INFO  | It takes a moment until task 934d0bfa-08f5-40fe-a92e-9157a2b626e3 (resolvconf) has been started and output is visible here.
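The shell trace above calls a wait_for_container_healthy helper three times, polling `docker inspect -f '{{.State.Health.Status}}'` until each manager container reports healthy before the deployment continues. A minimal sketch of that pattern follows; only the already-healthy path appears in the log, so the poll interval and failure handling below are assumptions rather than the script's confirmed behaviour.

```bash
#!/usr/bin/env bash
# Sketch of the health-wait pattern visible in the trace above.
# The retry interval and the error path are assumptions; the log only
# shows the happy path where the container is already healthy.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container ${name} did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5  # assumed poll interval; not visible in the trace
    done
}

# Calls as they appear in the deployment trace:
wait_for_container_healthy 60 ceph-ansible
wait_for_container_healthy 60 kolla-ansible
wait_for_container_healthy 60 osism-ansible
```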
2025-09-02 00:18:45.232630 | orchestrator | 2025-09-02 00:18:45.232728 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-09-02 00:18:45.232744 | orchestrator | 2025-09-02 00:18:45.232832 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-02 00:18:45.232848 | orchestrator | Tuesday 02 September 2025 00:18:34 +0000 (0:00:00.153) 0:00:00.153 ***** 2025-09-02 00:18:45.232860 | orchestrator | ok: [testbed-manager] 2025-09-02 00:18:45.232873 | orchestrator | 2025-09-02 00:18:45.232884 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-02 00:18:45.232895 | orchestrator | Tuesday 02 September 2025 00:18:39 +0000 (0:00:04.987) 0:00:05.140 ***** 2025-09-02 00:18:45.232906 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:18:45.232917 | orchestrator | 2025-09-02 00:18:45.232928 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-02 00:18:45.232939 | orchestrator | Tuesday 02 September 2025 00:18:39 +0000 (0:00:00.064) 0:00:05.205 ***** 2025-09-02 00:18:45.232950 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-09-02 00:18:45.232962 | orchestrator | 2025-09-02 00:18:45.232973 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-02 00:18:45.232984 | orchestrator | Tuesday 02 September 2025 00:18:39 +0000 (0:00:00.081) 0:00:05.287 ***** 2025-09-02 00:18:45.232994 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-09-02 00:18:45.233005 | orchestrator | 2025-09-02 00:18:45.233016 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-02 00:18:45.233027 | orchestrator | Tuesday 02 September 2025 00:18:39 +0000 (0:00:00.087) 0:00:05.374 ***** 2025-09-02 00:18:45.233038 | orchestrator | ok: [testbed-manager] 2025-09-02 00:18:45.233048 | orchestrator | 2025-09-02 00:18:45.233059 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-02 00:18:45.233070 | orchestrator | Tuesday 02 September 2025 00:18:40 +0000 (0:00:01.122) 0:00:06.496 ***** 2025-09-02 00:18:45.233080 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:18:45.233091 | orchestrator | 2025-09-02 00:18:45.233102 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-02 00:18:45.233112 | orchestrator | Tuesday 02 September 2025 00:18:40 +0000 (0:00:00.054) 0:00:06.551 ***** 2025-09-02 00:18:45.233123 | orchestrator | ok: [testbed-manager] 2025-09-02 00:18:45.233134 | orchestrator | 2025-09-02 00:18:45.233144 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-02 00:18:45.233155 | orchestrator | Tuesday 02 September 2025 00:18:41 +0000 (0:00:00.509) 0:00:07.061 ***** 2025-09-02 00:18:45.233169 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:18:45.233181 | orchestrator | 2025-09-02 00:18:45.233194 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-02 00:18:45.233207 | orchestrator | Tuesday 02 September 2025 00:18:41 +0000 (0:00:00.076) 
0:00:07.138 ***** 2025-09-02 00:18:45.233220 | orchestrator | changed: [testbed-manager] 2025-09-02 00:18:45.233233 | orchestrator | 2025-09-02 00:18:45.233246 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-02 00:18:45.233259 | orchestrator | Tuesday 02 September 2025 00:18:41 +0000 (0:00:00.553) 0:00:07.691 ***** 2025-09-02 00:18:45.233271 | orchestrator | changed: [testbed-manager] 2025-09-02 00:18:45.233283 | orchestrator | 2025-09-02 00:18:45.233295 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-02 00:18:45.233308 | orchestrator | Tuesday 02 September 2025 00:18:42 +0000 (0:00:01.088) 0:00:08.780 ***** 2025-09-02 00:18:45.233320 | orchestrator | ok: [testbed-manager] 2025-09-02 00:18:45.233333 | orchestrator | 2025-09-02 00:18:45.233345 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-02 00:18:45.233357 | orchestrator | Tuesday 02 September 2025 00:18:43 +0000 (0:00:00.989) 0:00:09.769 ***** 2025-09-02 00:18:45.233380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-09-02 00:18:45.233401 | orchestrator | 2025-09-02 00:18:45.233414 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-02 00:18:45.233426 | orchestrator | Tuesday 02 September 2025 00:18:43 +0000 (0:00:00.071) 0:00:09.841 ***** 2025-09-02 00:18:45.233439 | orchestrator | changed: [testbed-manager] 2025-09-02 00:18:45.233452 | orchestrator | 2025-09-02 00:18:45.233464 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:18:45.233478 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-02 00:18:45.233491 | orchestrator | 2025-09-02 00:18:45.233503 | orchestrator | 2025-09-02 00:18:45.233515 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:18:45.233528 | orchestrator | Tuesday 02 September 2025 00:18:45 +0000 (0:00:01.160) 0:00:11.001 ***** 2025-09-02 00:18:45.233541 | orchestrator | =============================================================================== 2025-09-02 00:18:45.233552 | orchestrator | Gathering Facts --------------------------------------------------------- 4.99s 2025-09-02 00:18:45.233563 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.16s 2025-09-02 00:18:45.233574 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.12s 2025-09-02 00:18:45.233584 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.09s 2025-09-02 00:18:45.233595 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.99s 2025-09-02 00:18:45.233606 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s 2025-09-02 00:18:45.233633 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.51s 2025-09-02 00:18:45.233645 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-09-02 00:18:45.233655 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-09-02 
00:18:45.233666 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-09-02 00:18:45.233676 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2025-09-02 00:18:45.233687 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-09-02 00:18:45.233697 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2025-09-02 00:18:45.523856 | orchestrator | + osism apply sshconfig 2025-09-02 00:18:57.598301 | orchestrator | 2025-09-02 00:18:57 | INFO  | Task 8fcd665c-4b5c-4b18-8668-1724c23ee46b (sshconfig) was prepared for execution. 2025-09-02 00:18:57.598420 | orchestrator | 2025-09-02 00:18:57 | INFO  | It takes a moment until task 8fcd665c-4b5c-4b18-8668-1724c23ee46b (sshconfig) has been started and output is visible here. 2025-09-02 00:19:09.445466 | orchestrator | 2025-09-02 00:19:09.445578 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-09-02 00:19:09.445594 | orchestrator | 2025-09-02 00:19:09.445606 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-09-02 00:19:09.445617 | orchestrator | Tuesday 02 September 2025 00:19:01 +0000 (0:00:00.168) 0:00:00.168 ***** 2025-09-02 00:19:09.445628 | orchestrator | ok: [testbed-manager] 2025-09-02 00:19:09.445640 | orchestrator | 2025-09-02 00:19:09.445650 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-09-02 00:19:09.445661 | orchestrator | Tuesday 02 September 2025 00:19:02 +0000 (0:00:00.582) 0:00:00.751 ***** 2025-09-02 00:19:09.445671 | orchestrator | changed: [testbed-manager] 2025-09-02 00:19:09.445682 | orchestrator | 2025-09-02 00:19:09.445694 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-09-02 00:19:09.445704 | orchestrator | Tuesday 02 September 2025 00:19:02 +0000 (0:00:00.517) 0:00:01.269 ***** 2025-09-02 00:19:09.445715 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-09-02 00:19:09.445751 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-09-02 00:19:09.445763 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-09-02 00:19:09.445813 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-09-02 00:19:09.445823 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-09-02 00:19:09.445847 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-09-02 00:19:09.445858 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-09-02 00:19:09.445867 | orchestrator | 2025-09-02 00:19:09.445877 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-09-02 00:19:09.445887 | orchestrator | Tuesday 02 September 2025 00:19:08 +0000 (0:00:05.844) 0:00:07.114 ***** 2025-09-02 00:19:09.445897 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:19:09.445907 | orchestrator | 2025-09-02 00:19:09.445917 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-09-02 00:19:09.445926 | orchestrator | Tuesday 02 September 2025 00:19:08 +0000 (0:00:00.077) 0:00:07.191 ***** 2025-09-02 00:19:09.445936 | orchestrator | changed: [testbed-manager] 2025-09-02 00:19:09.445946 | orchestrator | 2025-09-02 00:19:09.445955 | 
orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:19:09.445966 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-02 00:19:09.445977 | orchestrator | 2025-09-02 00:19:09.445987 | orchestrator | 2025-09-02 00:19:09.445997 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:19:09.446007 | orchestrator | Tuesday 02 September 2025 00:19:09 +0000 (0:00:00.610) 0:00:07.802 ***** 2025-09-02 00:19:09.446063 | orchestrator | =============================================================================== 2025-09-02 00:19:09.446075 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.85s 2025-09-02 00:19:09.446086 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s 2025-09-02 00:19:09.446097 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.58s 2025-09-02 00:19:09.446109 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.52s 2025-09-02 00:19:09.446120 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2025-09-02 00:19:09.753652 | orchestrator | + osism apply known-hosts 2025-09-02 00:19:21.856846 | orchestrator | 2025-09-02 00:19:21 | INFO  | Task a9c08af6-00f1-43ff-a7f7-425afd8985bd (known-hosts) was prepared for execution. 2025-09-02 00:19:21.856971 | orchestrator | 2025-09-02 00:19:21 | INFO  | It takes a moment until task a9c08af6-00f1-43ff-a7f7-425afd8985bd (known-hosts) has been started and output is visible here. 2025-09-02 00:19:38.832162 | orchestrator | 2025-09-02 00:19:38.832291 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-09-02 00:19:38.832308 | orchestrator | 2025-09-02 00:19:38.832321 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-09-02 00:19:38.832334 | orchestrator | Tuesday 02 September 2025 00:19:25 +0000 (0:00:00.200) 0:00:00.200 ***** 2025-09-02 00:19:38.832346 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-02 00:19:38.832358 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-02 00:19:38.832370 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-02 00:19:38.832381 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-02 00:19:38.832392 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-02 00:19:38.832403 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-02 00:19:38.832413 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-02 00:19:38.832424 | orchestrator | 2025-09-02 00:19:38.832435 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-09-02 00:19:38.832448 | orchestrator | Tuesday 02 September 2025 00:19:31 +0000 (0:00:06.080) 0:00:06.280 ***** 2025-09-02 00:19:38.832488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-02 00:19:38.832502 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned 
entries of testbed-node-0) 2025-09-02 00:19:38.832513 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-02 00:19:38.832524 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-02 00:19:38.832535 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-02 00:19:38.832557 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-02 00:19:38.832568 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-02 00:19:38.832579 | orchestrator | 2025-09-02 00:19:38.832591 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-02 00:19:38.832602 | orchestrator | Tuesday 02 September 2025 00:19:32 +0000 (0:00:00.166) 0:00:06.447 ***** 2025-09-02 00:19:38.832613 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII0orQp/KJemkcVZK+OUi+KzLd93HiAc4FvOZzEf6WLk) 2025-09-02 00:19:38.832630 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5arCYh2Lp4HgHC4+S43mcOJLM+Ao3jSEC1KCZzhCb7m9nZ8Mn54NOH1T0y5qJRHs6EZ5LDr5Av6DIhARbscY6EaVf2QCG0Dqb/0TWS6IfLa28O2GylpWL4VSHmMSjogJH8rDsxgZ5gDnDz66FrKrPXc9bllZuOfnTez/GViRtpyrZn1yxJiaWvchZQcrZniWrmQcZg5iA5TaJsyuNYhNoM2BgDbMmb6F07MmmI62IRNy6CWrVIGifUr4aY3utC/MCjT6Xnn5VTpPllVpZQJSoWTQHi0W7nb3CWxT931xUM6P2oKCkptuT7/OJMaUwaTFxwO31qptNeFhUoiYSPgce4jnHJhACs0OHvcKd8dgUgeCb5jSrcIwCtAt1JS2wyfIUPFgoQyPhyN/fLuHRo7um5LkapRjg3gEt5NTLLaluMKUrtNFlVn4T0resP2z0LVl4jTlwMvUa1Npz3WAKiSoDeZH8Q6ooHUFtlepx/UsibSeM63iCLJk6g4sm0V1woTU=) 2025-09-02 00:19:38.832645 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDMsC+Pj5ss56EHleicyGZmbvLOkmJiqNkag+rEMUdO9fsl5cF07xpYMrES65fqZkGgov2gaIxHK8JSwPypbmk4=) 2025-09-02 00:19:38.832659 | orchestrator | 2025-09-02 00:19:38.832670 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-02 00:19:38.832683 | orchestrator | Tuesday 02 September 2025 00:19:33 +0000 (0:00:01.225) 0:00:07.672 ***** 2025-09-02 00:19:38.832695 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB3yITmJhu6EnZytIhTA5YBJE7kOYojdhvpQFuS5wgokfs1vTa1Re0IY7aQPkaI934SThh8+9KeR0XCuG8ltp9U=) 2025-09-02 00:19:38.832737 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDzEXr4Xd1aKmSmnnRec0izdTehaE8pZ4DTdeFpVUa7axHW81nte0J5BzWFrFb0sI4wvTnOSCcqGi42yWM8HJchH5agl148x/C04Nj2MYf350yA4oXXXemI6SPahlbrxDFjYCPrm4k+n0iCuPvNUcRDCQfvWcerIXzrsPGblZNk43q00nelgGK+/h5uDSypBw/2T73s87hxNwNNMFVFO6hDMbqUJEqZGR7qblRGM9fyr4OZlVw3zVAbOMdk9zaTiGuWyoq5ILQ1AmtUAWX3ACEaDAZcC77lvjSoSzDp4tV+KXFEaa8d0aG26ObMM/SyWaeymBWrii3empygq+HoaeyakM8TzjVn/lgac/yC4lPDxnRTinukfn5fxuNDdnfcNtf7XNDCdnTzrL0JZxDqBFskB/gdj4R5b8jEsL0k9H23xiscF05/9pFPBQf3wtdA9vNUcXHO6hzOKu6OwgID30dPGK1vmSEcPGRjKl8idAy1PVTRfIngp3ZQW+cQd7H5Scs=) 2025-09-02 00:19:38.832788 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOa+1RPqoviPd3NaXpRn+Zfn2i5qiYV+16InjrQGkVhh) 2025-09-02 00:19:38.832802 | orchestrator | 2025-09-02 00:19:38.832815 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-02 00:19:38.832828 | orchestrator | Tuesday 02 September 2025 00:19:34 +0000 (0:00:01.062) 0:00:08.735 ***** 2025-09-02 00:19:38.832842 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZQk8gircy6b2UKTEsTc7Y+ssC5KE3aY8F1sabWphU4iDmX0PfC0fRmNDwzLnt+VduEjd355IiTFEUmQel0a/s6yoAvbEAiIG08i6FL/q9p9TY6mlNGUDIadmxRsMi8VVQP7NwmYj6lf7PIfM2k240jYmmpnS54QJXQ62AYlebXDfwfwPlI8A59Fy/hB0On+A9dbtWq4ZkTh031duRVL9UkgkRKWU2PdMoQs5l/WP/nnlv+J/73HwSHPp0eW3Xg9lTPbOoWH0XmsNEOPMgM4WRvXmBgWOXJyQMvmLq4WehpMb+oiage8h1PiruzcvRbDr1MH38tuW4CV6cUWencnUeeHqX9DCvvVvu0QJcx1q+mVv/jZT9/oJij6SoAIE5u3C3K1qdbiGFDK7oNglvWpvCakhOyR3dUjH+jWqwYcQ8jw80o06+B5ecOc0n+xX60IRYhX/pAMs/vpz7AsReu9WAaZO7Cd5rkudR4LC3yYbRg88XMZxiumzxyIjf9dGdeuU=) 2025-09-02 00:19:38.832855 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHxkJ6vYzCJGe8UpEPZmE/OG5HN20mRGLAsExf9f8rFn2JR4AgCoMkAj1XrQ0DsGkfQobE6hBvVOCD+AbiQpfpA=) 2025-09-02 00:19:38.832868 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICLWNfPe45fEFl1wgBu/43w1wF9ba3WBxg1UKKsC2MV7) 2025-09-02 00:19:38.832880 | orchestrator | 2025-09-02 00:19:38.832892 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-02 00:19:38.832905 | orchestrator | Tuesday 02 September 2025 00:19:35 +0000 (0:00:01.069) 0:00:09.804 ***** 2025-09-02 00:19:38.832988 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDpLsICFPISMhXOINxvF8HOnqWt0TWktxlxNIpzbvxlH3kcp7VnPu5tirAU+hSAW347UOrASRtjnnZTd8efIApnJ9LfMshZh8SV39xFJdLNKXeFbO7/kIHEqs7db8eXkBAxHmki9olKgshAMO1xeOuRpZaMBVZlAsKMiceSmGVTHlt62kFhs8fni6qZAM5dImB6GpAY1MaEFR5Q6Akp7PlGnhy9X+fbJzn8Up9HaivKucqCjONY7CICzELD21sc9xCvOAfSnw5UNgkK/6NZQqlAju0tZX/Pd9iVyNByhZZIY3OPkXyKCixbSw33Ox5iKJCxFkDBsrD1dil90OWvw1D46tfZhJJQl1tNTjPIQy2eU/T5keZ0UGjNn2ozpUKgmFh95tUdwr9aXuh+twnTC0m04xZZajFUSWZLz9DGMEVHALBzeXwfdqkbAc5QH1LZUCuC8IqRq/bvyF5WKi14hPbvMBuDjQRGUlP39J1FbI0aZLqGBGEYwO90IRARkr+wyks=) 2025-09-02 00:19:38.833002 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFmLKphTpCt22pDXqss2bgUHoZEC8DEntOKO+Rn59qZNOWPi2QWuKobZbUGX+WvuNq/WTYU/16+EYJAEpBFbXQ0=) 2025-09-02 00:19:38.833015 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIwYaIJ3q3xDK2u5QasdQVHJuDR1ULL5+mVH6lf+VDMh) 2025-09-02 00:19:38.833029 | orchestrator | 2025-09-02 00:19:38.833039 | orchestrator | TASK 
[osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-02 00:19:38.833051 | orchestrator | Tuesday 02 September 2025 00:19:36 +0000 (0:00:01.112) 0:00:10.917 ***** 2025-09-02 00:19:38.833062 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFbq378DdgUmPuRXQEIl64yrS6OcEE9grazDz48liOZuxxdR323MVl4uQ6Gd4xxwpDlaUAY1xpIVgNuxHicRtbw=) 2025-09-02 00:19:38.833073 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsvLrGZM+GFpbn5mJpwFepnQre7DykfLJls4KvADiN8C2iZC6UNnf7tKXZTymjXLD+OAR59DKVW7GCs36d7k7ZFD72cKnj+ubLqwWqvigiyrLBBjNaLdw0rk7R3Biatbu2cHQSAG7aFJ1IQdZorm7DR1uh1pVXoljr4vrQJRZZ4MwaTn3LOVYA107Gsv6SVZtG0cSEF+DwQhuPzgrQT9Zs3id1abJ0aIyhwRIZKV2QMm1CBHNtgOP6fKAe/yRJXRUjqetSFuMCxR2d/A5vFmDQhRQ/bKzRKNNpWjkvNgZx2W66q46HUgAHDJ4rA0f+hBU3lmS++Ah8DV5nSV0BHjYFrWdNmq47C+TkwbKJM9PNbNtheFwuQIHgq3lUATMZBRPTTtfo/13oVWAqubNLWLzxfSomp2imBndDtzd2oSpVgZu8OctMv97QkkOsvoN4twd1iExm94eNR+mlUbODtEa64bA+Oqf2BKoPSfEI66+wruJ1w7jcV6uUnlYmCEz2SnE=) 2025-09-02 00:19:38.833085 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKKOMIJrRz17f47IO8YqgQGIHbeeuahI85DHfm1o5IC0) 2025-09-02 00:19:38.833103 | orchestrator | 2025-09-02 00:19:38.833114 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-02 00:19:38.833125 | orchestrator | Tuesday 02 September 2025 00:19:37 +0000 (0:00:01.100) 0:00:12.018 ***** 2025-09-02 00:19:38.833144 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMJoEtLJ0UEZ+bbCEPhU4lSTiZPteWYku6zqXOkIMv1) 2025-09-02 00:19:49.892694 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCXH8Qez6mSZJfzLTMZfBIvOffvOAKB2/d9JpTkTEQfiY4RQfpj4jCI9e47FmPgSB8qOgN2HS8sd0YWGqpLJRs4QJ+lsZqrkA/niba8IPtWNUh2/IGrz0x9QeuIo3+sOzC8Pnp+0OXxRaRKuJBGRsOwUbd+u8qEdzt1jydqq/dywFGt9BL9pkvfcqGHqInmQllETEUAOyzk4/1LUlT6BRHm3JJyeRKWD2CL3yvzu5ihU8rvQnZXAewBKAj23ZZUgKpWp4JlWohf/w7YLUKNiHso1cuOE4uB+OJoEQupxHwLSBs7Qw5rJG//sp40piiEylGA1DaEX7meryuF6ZO78CQ4mbfskA68EOfMGr1iuZFVomzh+g/3lFswmonPVHC+LWpMYr5BlsI8jXSiAJ7mdU5oKHjYUoanfDeeOnsXlzsYd+IVihm9c0vkdeyCTOFNXXtjrldC91jBzSUix/+wp1/tS2EbdKl2y8E98MFfxFr5y15Cjwshr7PU21LX5HT6UU=) 2025-09-02 00:19:49.892869 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI+ix/+au1HWGJrmytI8+2bh8HL/2UM9ls7COWY2oQ0NRDDXfdA1+SLU0FSOHkiWdLl73DnA+ljOponjzbIIIQk=) 2025-09-02 00:19:49.892888 | orchestrator | 2025-09-02 00:19:49.892902 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-02 00:19:49.892915 | orchestrator | Tuesday 02 September 2025 00:19:38 +0000 (0:00:01.101) 0:00:13.120 ***** 2025-09-02 00:19:49.892926 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB0XFU6HY4klWrNXosbhxGTJ8jT8B/Z9r3I0mkJbvfV+) 2025-09-02 00:19:49.892940 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDf/SX1ejPLWpgH5hC9zDafdPWJVOfIVO4ICu07taj9WsiijgG7V+d7f21Ca5NE+KmO5IO3/dku5Fdzpv07uzftPKk2AmpIngjmoxS70Vu/ZGmuDqfNcXNTLHc4TE0DuUSrJLaoHOoRqWSETfUD7KeCGYywPGup7g3eS+18gIc5kW6YvrXmB2BfKboR4QDQPl2+486T8W8QmXA+ekQtA4MxXLVyrvXd6vzcNlWF2i0zd56kJdCUwzMy4boWgc/UidDM0OYzBlAcrkFVIza7JJdOuqAMmwg/R0qDgNwkddHEnNiCsMXr3p9LZjxLyoQCZYffvK9wmi/u1nRThm4tWpSMUQ2Bf8a/IJWjCsHqriY6P20MzJHu+Gay0DSlw1xhdIWL/vNTIMKdGtBfBl8kd08QrZ6s8daPlKe/BqvBq+NG5GJ0x2MYzh5XzpdQgLtiijwBFy0D8ngkRLPqdlCptVB1XPkqlw9+v7raN5C46j7xmtzHieiGIka09q56EaOqOHM=) 2025-09-02 00:19:49.892951 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJc+eDNJ1N0Kzg73sBb4GuFJTOmFa5+Kl7M11ymRr1/51hcNGfNHxzIFOPch+yw8LXon/T/yxVI0c8fdLkVN5lo=) 2025-09-02 00:19:49.892963 | orchestrator | 2025-09-02 00:19:49.892974 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-02 00:19:49.892986 | orchestrator | Tuesday 02 September 2025 00:19:39 +0000 (0:00:01.071) 0:00:14.192 ***** 2025-09-02 00:19:49.892998 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-02 00:19:49.893009 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-02 00:19:49.893020 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-02 00:19:49.893030 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-02 00:19:49.893041 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-02 00:19:49.893051 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-02 00:19:49.893062 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-02 00:19:49.893073 | orchestrator | 2025-09-02 00:19:49.893084 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-02 00:19:49.893116 | orchestrator | Tuesday 02 September 2025 00:19:45 +0000 (0:00:05.350) 0:00:19.543 ***** 2025-09-02 00:19:49.893129 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-02 00:19:49.893143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-02 00:19:49.893178 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-02 00:19:49.893190 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-02 00:19:49.893201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-02 00:19:49.893213 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-02 00:19:49.893227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml 
for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-02 00:19:49.893239 | orchestrator | 2025-09-02 00:19:49.893269 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-02 00:19:49.893282 | orchestrator | Tuesday 02 September 2025 00:19:45 +0000 (0:00:00.216) 0:00:19.759 ***** 2025-09-02 00:19:49.893295 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII0orQp/KJemkcVZK+OUi+KzLd93HiAc4FvOZzEf6WLk) 2025-09-02 00:19:49.893312 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5arCYh2Lp4HgHC4+S43mcOJLM+Ao3jSEC1KCZzhCb7m9nZ8Mn54NOH1T0y5qJRHs6EZ5LDr5Av6DIhARbscY6EaVf2QCG0Dqb/0TWS6IfLa28O2GylpWL4VSHmMSjogJH8rDsxgZ5gDnDz66FrKrPXc9bllZuOfnTez/GViRtpyrZn1yxJiaWvchZQcrZniWrmQcZg5iA5TaJsyuNYhNoM2BgDbMmb6F07MmmI62IRNy6CWrVIGifUr4aY3utC/MCjT6Xnn5VTpPllVpZQJSoWTQHi0W7nb3CWxT931xUM6P2oKCkptuT7/OJMaUwaTFxwO31qptNeFhUoiYSPgce4jnHJhACs0OHvcKd8dgUgeCb5jSrcIwCtAt1JS2wyfIUPFgoQyPhyN/fLuHRo7um5LkapRjg3gEt5NTLLaluMKUrtNFlVn4T0resP2z0LVl4jTlwMvUa1Npz3WAKiSoDeZH8Q6ooHUFtlepx/UsibSeM63iCLJk6g4sm0V1woTU=) 2025-09-02 00:19:49.893325 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDMsC+Pj5ss56EHleicyGZmbvLOkmJiqNkag+rEMUdO9fsl5cF07xpYMrES65fqZkGgov2gaIxHK8JSwPypbmk4=) 2025-09-02 00:19:49.893337 | orchestrator | 2025-09-02 00:19:49.893350 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-02 00:19:49.893364 | orchestrator | Tuesday 02 September 2025 00:19:46 +0000 (0:00:01.105) 0:00:20.865 ***** 2025-09-02 00:19:49.893376 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB3yITmJhu6EnZytIhTA5YBJE7kOYojdhvpQFuS5wgokfs1vTa1Re0IY7aQPkaI934SThh8+9KeR0XCuG8ltp9U=) 2025-09-02 00:19:49.893389 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOa+1RPqoviPd3NaXpRn+Zfn2i5qiYV+16InjrQGkVhh) 2025-09-02 00:19:49.893402 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDzEXr4Xd1aKmSmnnRec0izdTehaE8pZ4DTdeFpVUa7axHW81nte0J5BzWFrFb0sI4wvTnOSCcqGi42yWM8HJchH5agl148x/C04Nj2MYf350yA4oXXXemI6SPahlbrxDFjYCPrm4k+n0iCuPvNUcRDCQfvWcerIXzrsPGblZNk43q00nelgGK+/h5uDSypBw/2T73s87hxNwNNMFVFO6hDMbqUJEqZGR7qblRGM9fyr4OZlVw3zVAbOMdk9zaTiGuWyoq5ILQ1AmtUAWX3ACEaDAZcC77lvjSoSzDp4tV+KXFEaa8d0aG26ObMM/SyWaeymBWrii3empygq+HoaeyakM8TzjVn/lgac/yC4lPDxnRTinukfn5fxuNDdnfcNtf7XNDCdnTzrL0JZxDqBFskB/gdj4R5b8jEsL0k9H23xiscF05/9pFPBQf3wtdA9vNUcXHO6hzOKu6OwgID30dPGK1vmSEcPGRjKl8idAy1PVTRfIngp3ZQW+cQd7H5Scs=) 2025-09-02 00:19:49.893415 | orchestrator | 2025-09-02 00:19:49.893428 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-02 00:19:49.893441 | orchestrator | Tuesday 02 September 2025 00:19:47 +0000 (0:00:01.132) 0:00:21.998 ***** 2025-09-02 00:19:49.893462 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICLWNfPe45fEFl1wgBu/43w1wF9ba3WBxg1UKKsC2MV7) 2025-09-02 00:19:49.893476 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCZQk8gircy6b2UKTEsTc7Y+ssC5KE3aY8F1sabWphU4iDmX0PfC0fRmNDwzLnt+VduEjd355IiTFEUmQel0a/s6yoAvbEAiIG08i6FL/q9p9TY6mlNGUDIadmxRsMi8VVQP7NwmYj6lf7PIfM2k240jYmmpnS54QJXQ62AYlebXDfwfwPlI8A59Fy/hB0On+A9dbtWq4ZkTh031duRVL9UkgkRKWU2PdMoQs5l/WP/nnlv+J/73HwSHPp0eW3Xg9lTPbOoWH0XmsNEOPMgM4WRvXmBgWOXJyQMvmLq4WehpMb+oiage8h1PiruzcvRbDr1MH38tuW4CV6cUWencnUeeHqX9DCvvVvu0QJcx1q+mVv/jZT9/oJij6SoAIE5u3C3K1qdbiGFDK7oNglvWpvCakhOyR3dUjH+jWqwYcQ8jw80o06+B5ecOc0n+xX60IRYhX/pAMs/vpz7AsReu9WAaZO7Cd5rkudR4LC3yYbRg88XMZxiumzxyIjf9dGdeuU=) 2025-09-02 00:19:49.893489 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHxkJ6vYzCJGe8UpEPZmE/OG5HN20mRGLAsExf9f8rFn2JR4AgCoMkAj1XrQ0DsGkfQobE6hBvVOCD+AbiQpfpA=) 2025-09-02 00:19:49.893502 | orchestrator | 2025-09-02 00:19:49.893515 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-02 00:19:49.893527 | orchestrator | Tuesday 02 September 2025 00:19:48 +0000 (0:00:01.075) 0:00:23.073 ***** 2025-09-02 00:19:49.893551 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFmLKphTpCt22pDXqss2bgUHoZEC8DEntOKO+Rn59qZNOWPi2QWuKobZbUGX+WvuNq/WTYU/16+EYJAEpBFbXQ0=) 2025-09-02 00:19:49.893585 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDpLsICFPISMhXOINxvF8HOnqWt0TWktxlxNIpzbvxlH3kcp7VnPu5tirAU+hSAW347UOrASRtjnnZTd8efIApnJ9LfMshZh8SV39xFJdLNKXeFbO7/kIHEqs7db8eXkBAxHmki9olKgshAMO1xeOuRpZaMBVZlAsKMiceSmGVTHlt62kFhs8fni6qZAM5dImB6GpAY1MaEFR5Q6Akp7PlGnhy9X+fbJzn8Up9HaivKucqCjONY7CICzELD21sc9xCvOAfSnw5UNgkK/6NZQqlAju0tZX/Pd9iVyNByhZZIY3OPkXyKCixbSw33Ox5iKJCxFkDBsrD1dil90OWvw1D46tfZhJJQl1tNTjPIQy2eU/T5keZ0UGjNn2ozpUKgmFh95tUdwr9aXuh+twnTC0m04xZZajFUSWZLz9DGMEVHALBzeXwfdqkbAc5QH1LZUCuC8IqRq/bvyF5WKi14hPbvMBuDjQRGUlP39J1FbI0aZLqGBGEYwO90IRARkr+wyks=) 2025-09-02 00:19:54.303152 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIwYaIJ3q3xDK2u5QasdQVHJuDR1ULL5+mVH6lf+VDMh) 2025-09-02 00:19:54.303268 | orchestrator | 2025-09-02 00:19:54.303293 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-02 00:19:54.303314 | orchestrator | Tuesday 02 September 2025 00:19:49 +0000 (0:00:01.106) 0:00:24.179 ***** 2025-09-02 00:19:54.303333 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKKOMIJrRz17f47IO8YqgQGIHbeeuahI85DHfm1o5IC0) 2025-09-02 00:19:54.303355 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsvLrGZM+GFpbn5mJpwFepnQre7DykfLJls4KvADiN8C2iZC6UNnf7tKXZTymjXLD+OAR59DKVW7GCs36d7k7ZFD72cKnj+ubLqwWqvigiyrLBBjNaLdw0rk7R3Biatbu2cHQSAG7aFJ1IQdZorm7DR1uh1pVXoljr4vrQJRZZ4MwaTn3LOVYA107Gsv6SVZtG0cSEF+DwQhuPzgrQT9Zs3id1abJ0aIyhwRIZKV2QMm1CBHNtgOP6fKAe/yRJXRUjqetSFuMCxR2d/A5vFmDQhRQ/bKzRKNNpWjkvNgZx2W66q46HUgAHDJ4rA0f+hBU3lmS++Ah8DV5nSV0BHjYFrWdNmq47C+TkwbKJM9PNbNtheFwuQIHgq3lUATMZBRPTTtfo/13oVWAqubNLWLzxfSomp2imBndDtzd2oSpVgZu8OctMv97QkkOsvoN4twd1iExm94eNR+mlUbODtEa64bA+Oqf2BKoPSfEI66+wruJ1w7jcV6uUnlYmCEz2SnE=) 2025-09-02 00:19:54.303380 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFbq378DdgUmPuRXQEIl64yrS6OcEE9grazDz48liOZuxxdR323MVl4uQ6Gd4xxwpDlaUAY1xpIVgNuxHicRtbw=) 2025-09-02 
00:19:54.303407 | orchestrator | 2025-09-02 00:19:54.303433 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-02 00:19:54.303454 | orchestrator | Tuesday 02 September 2025 00:19:51 +0000 (0:00:01.146) 0:00:25.326 ***** 2025-09-02 00:19:54.303474 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMJoEtLJ0UEZ+bbCEPhU4lSTiZPteWYku6zqXOkIMv1) 2025-09-02 00:19:54.303527 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCXH8Qez6mSZJfzLTMZfBIvOffvOAKB2/d9JpTkTEQfiY4RQfpj4jCI9e47FmPgSB8qOgN2HS8sd0YWGqpLJRs4QJ+lsZqrkA/niba8IPtWNUh2/IGrz0x9QeuIo3+sOzC8Pnp+0OXxRaRKuJBGRsOwUbd+u8qEdzt1jydqq/dywFGt9BL9pkvfcqGHqInmQllETEUAOyzk4/1LUlT6BRHm3JJyeRKWD2CL3yvzu5ihU8rvQnZXAewBKAj23ZZUgKpWp4JlWohf/w7YLUKNiHso1cuOE4uB+OJoEQupxHwLSBs7Qw5rJG//sp40piiEylGA1DaEX7meryuF6ZO78CQ4mbfskA68EOfMGr1iuZFVomzh+g/3lFswmonPVHC+LWpMYr5BlsI8jXSiAJ7mdU5oKHjYUoanfDeeOnsXlzsYd+IVihm9c0vkdeyCTOFNXXtjrldC91jBzSUix/+wp1/tS2EbdKl2y8E98MFfxFr5y15Cjwshr7PU21LX5HT6UU=) 2025-09-02 00:19:54.303541 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI+ix/+au1HWGJrmytI8+2bh8HL/2UM9ls7COWY2oQ0NRDDXfdA1+SLU0FSOHkiWdLl73DnA+ljOponjzbIIIQk=) 2025-09-02 00:19:54.303552 | orchestrator | 2025-09-02 00:19:54.303563 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-02 00:19:54.303574 | orchestrator | Tuesday 02 September 2025 00:19:52 +0000 (0:00:01.083) 0:00:26.410 ***** 2025-09-02 00:19:54.303585 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDf/SX1ejPLWpgH5hC9zDafdPWJVOfIVO4ICu07taj9WsiijgG7V+d7f21Ca5NE+KmO5IO3/dku5Fdzpv07uzftPKk2AmpIngjmoxS70Vu/ZGmuDqfNcXNTLHc4TE0DuUSrJLaoHOoRqWSETfUD7KeCGYywPGup7g3eS+18gIc5kW6YvrXmB2BfKboR4QDQPl2+486T8W8QmXA+ekQtA4MxXLVyrvXd6vzcNlWF2i0zd56kJdCUwzMy4boWgc/UidDM0OYzBlAcrkFVIza7JJdOuqAMmwg/R0qDgNwkddHEnNiCsMXr3p9LZjxLyoQCZYffvK9wmi/u1nRThm4tWpSMUQ2Bf8a/IJWjCsHqriY6P20MzJHu+Gay0DSlw1xhdIWL/vNTIMKdGtBfBl8kd08QrZ6s8daPlKe/BqvBq+NG5GJ0x2MYzh5XzpdQgLtiijwBFy0D8ngkRLPqdlCptVB1XPkqlw9+v7raN5C46j7xmtzHieiGIka09q56EaOqOHM=) 2025-09-02 00:19:54.303597 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJc+eDNJ1N0Kzg73sBb4GuFJTOmFa5+Kl7M11ymRr1/51hcNGfNHxzIFOPch+yw8LXon/T/yxVI0c8fdLkVN5lo=) 2025-09-02 00:19:54.303608 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB0XFU6HY4klWrNXosbhxGTJ8jT8B/Z9r3I0mkJbvfV+) 2025-09-02 00:19:54.303619 | orchestrator | 2025-09-02 00:19:54.303630 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-02 00:19:54.303641 | orchestrator | Tuesday 02 September 2025 00:19:53 +0000 (0:00:01.059) 0:00:27.469 ***** 2025-09-02 00:19:54.303653 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-02 00:19:54.303664 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-02 00:19:54.303677 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-02 00:19:54.303690 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-02 00:19:54.303704 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-02 00:19:54.303736 | orchestrator | skipping: 
[testbed-manager] => (item=testbed-node-4)  2025-09-02 00:19:54.303791 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-02 00:19:54.303805 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:19:54.303819 | orchestrator | 2025-09-02 00:19:54.303832 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-02 00:19:54.303844 | orchestrator | Tuesday 02 September 2025 00:19:53 +0000 (0:00:00.165) 0:00:27.635 ***** 2025-09-02 00:19:54.303857 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:19:54.303869 | orchestrator | 2025-09-02 00:19:54.303882 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-02 00:19:54.303894 | orchestrator | Tuesday 02 September 2025 00:19:53 +0000 (0:00:00.090) 0:00:27.725 ***** 2025-09-02 00:19:54.303907 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:19:54.303919 | orchestrator | 2025-09-02 00:19:54.303931 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-02 00:19:54.303945 | orchestrator | Tuesday 02 September 2025 00:19:53 +0000 (0:00:00.066) 0:00:27.792 ***** 2025-09-02 00:19:54.303967 | orchestrator | changed: [testbed-manager] 2025-09-02 00:19:54.303980 | orchestrator | 2025-09-02 00:19:54.303992 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:19:54.304007 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-02 00:19:54.304021 | orchestrator | 2025-09-02 00:19:54.304033 | orchestrator | 2025-09-02 00:19:54.304063 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:19:54.304075 | orchestrator | Tuesday 02 September 2025 00:19:54 +0000 (0:00:00.541) 0:00:28.334 ***** 2025-09-02 00:19:54.304086 | orchestrator | =============================================================================== 2025-09-02 00:19:54.304097 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.08s 2025-09-02 00:19:54.304108 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.35s 2025-09-02 00:19:54.304119 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2025-09-02 00:19:54.304130 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-09-02 00:19:54.304141 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-09-02 00:19:54.304152 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-02 00:19:54.304163 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-02 00:19:54.304173 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-02 00:19:54.304184 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-09-02 00:19:54.304194 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-09-02 00:19:54.304205 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-02 00:19:54.304216 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-02 
00:19:54.304227 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-02 00:19:54.304237 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-02 00:19:54.304248 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-02 00:19:54.304258 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-02 00:19:54.304269 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.54s 2025-09-02 00:19:54.304280 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.22s 2025-09-02 00:19:54.304292 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-09-02 00:19:54.304303 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-09-02 00:19:54.607121 | orchestrator | + osism apply squid 2025-09-02 00:20:06.583961 | orchestrator | 2025-09-02 00:20:06 | INFO  | Task 52c4d570-8c08-4c0e-832c-cbac4a9c1eff (squid) was prepared for execution. 2025-09-02 00:20:06.584062 | orchestrator | 2025-09-02 00:20:06 | INFO  | It takes a moment until task 52c4d570-8c08-4c0e-832c-cbac4a9c1eff (squid) has been started and output is visible here. 2025-09-02 00:22:00.839848 | orchestrator | 2025-09-02 00:22:00.839967 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-02 00:22:00.839982 | orchestrator | 2025-09-02 00:22:00.839993 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-02 00:22:00.840003 | orchestrator | Tuesday 02 September 2025 00:20:10 +0000 (0:00:00.191) 0:00:00.191 ***** 2025-09-02 00:22:00.840014 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-02 00:22:00.840025 | orchestrator | 2025-09-02 00:22:00.840060 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-02 00:22:00.840071 | orchestrator | Tuesday 02 September 2025 00:20:10 +0000 (0:00:00.103) 0:00:00.294 ***** 2025-09-02 00:22:00.840081 | orchestrator | ok: [testbed-manager] 2025-09-02 00:22:00.840092 | orchestrator | 2025-09-02 00:22:00.840102 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-02 00:22:00.840112 | orchestrator | Tuesday 02 September 2025 00:20:12 +0000 (0:00:01.446) 0:00:01.740 ***** 2025-09-02 00:22:00.840122 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-02 00:22:00.840132 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-02 00:22:00.840141 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-02 00:22:00.840151 | orchestrator | 2025-09-02 00:22:00.840160 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-02 00:22:00.840169 | orchestrator | Tuesday 02 September 2025 00:20:13 +0000 (0:00:01.205) 0:00:02.946 ***** 2025-09-02 00:22:00.840179 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-02 00:22:00.840189 | orchestrator | 2025-09-02 00:22:00.840198 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration 
file] *** 2025-09-02 00:22:00.840208 | orchestrator | Tuesday 02 September 2025 00:20:14 +0000 (0:00:01.113) 0:00:04.059 ***** 2025-09-02 00:22:00.840217 | orchestrator | ok: [testbed-manager] 2025-09-02 00:22:00.840226 | orchestrator | 2025-09-02 00:22:00.840236 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-02 00:22:00.840245 | orchestrator | Tuesday 02 September 2025 00:20:14 +0000 (0:00:00.365) 0:00:04.425 ***** 2025-09-02 00:22:00.840254 | orchestrator | changed: [testbed-manager] 2025-09-02 00:22:00.840264 | orchestrator | 2025-09-02 00:22:00.840273 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-02 00:22:00.840283 | orchestrator | Tuesday 02 September 2025 00:20:15 +0000 (0:00:01.002) 0:00:05.427 ***** 2025-09-02 00:22:00.840292 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-09-02 00:22:00.840302 | orchestrator | ok: [testbed-manager] 2025-09-02 00:22:00.840312 | orchestrator | 2025-09-02 00:22:00.840321 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-02 00:22:00.840331 | orchestrator | Tuesday 02 September 2025 00:20:47 +0000 (0:00:31.798) 0:00:37.226 ***** 2025-09-02 00:22:00.840342 | orchestrator | changed: [testbed-manager] 2025-09-02 00:22:00.840353 | orchestrator | 2025-09-02 00:22:00.840365 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-02 00:22:00.840376 | orchestrator | Tuesday 02 September 2025 00:20:59 +0000 (0:00:12.180) 0:00:49.407 ***** 2025-09-02 00:22:00.840387 | orchestrator | Pausing for 60 seconds 2025-09-02 00:22:00.840398 | orchestrator | changed: [testbed-manager] 2025-09-02 00:22:00.840410 | orchestrator | 2025-09-02 00:22:00.840422 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-02 00:22:00.840433 | orchestrator | Tuesday 02 September 2025 00:21:59 +0000 (0:01:00.073) 0:01:49.480 ***** 2025-09-02 00:22:00.840444 | orchestrator | ok: [testbed-manager] 2025-09-02 00:22:00.840455 | orchestrator | 2025-09-02 00:22:00.840466 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-02 00:22:00.840478 | orchestrator | Tuesday 02 September 2025 00:21:59 +0000 (0:00:00.061) 0:01:49.542 ***** 2025-09-02 00:22:00.840489 | orchestrator | changed: [testbed-manager] 2025-09-02 00:22:00.840500 | orchestrator | 2025-09-02 00:22:00.840511 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:22:00.840523 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:22:00.840534 | orchestrator | 2025-09-02 00:22:00.840545 | orchestrator | 2025-09-02 00:22:00.840556 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:22:00.840568 | orchestrator | Tuesday 02 September 2025 00:22:00 +0000 (0:00:00.659) 0:01:50.202 ***** 2025-09-02 00:22:00.840588 | orchestrator | =============================================================================== 2025-09-02 00:22:00.840599 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-09-02 00:22:00.840610 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.80s 2025-09-02 00:22:00.840621 | 
orchestrator | osism.services.squid : Restart squid service --------------------------- 12.18s 2025-09-02 00:22:00.840633 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.45s 2025-09-02 00:22:00.840643 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.21s 2025-09-02 00:22:00.840655 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.11s 2025-09-02 00:22:00.840666 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.00s 2025-09-02 00:22:00.840677 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s 2025-09-02 00:22:00.840688 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-09-02 00:22:00.840720 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-09-02 00:22:00.840730 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-09-02 00:22:01.127428 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-02 00:22:01.127620 | orchestrator | ++ semver latest 9.0.0 2025-09-02 00:22:01.186769 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-02 00:22:01.186848 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-02 00:22:01.187220 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-02 00:22:13.306843 | orchestrator | 2025-09-02 00:22:13 | INFO  | Task 0bd7420d-07bc-40e1-8d17-57e2eac77344 (operator) was prepared for execution. 2025-09-02 00:22:13.306966 | orchestrator | 2025-09-02 00:22:13 | INFO  | It takes a moment until task 0bd7420d-07bc-40e1-8d17-57e2eac77344 (operator) has been started and output is visible here. 
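Each configuration step in this part of the job follows the same pattern shown by the INFO lines above: an `osism apply <role>` call queues a task on the manager, and the corresponding Ansible play output is streamed back once a worker has picked it up. The sketch below collects the invocations exactly as they appear in the trace so far; running them by hand against an existing manager is an assumption about how this sequence would be reproduced outside the CI job.

```bash
# osism apply sequence taken verbatim from the deployment trace above.
# Each call blocks until the queued task has run and its play output
# has been printed; flags are copied as-is from the log.
osism apply resolvconf -l testbed-manager          # limit the play to the manager node
osism apply sshconfig
osism apply known-hosts
osism apply squid
osism apply operator -u ubuntu -l testbed-nodes    # bootstrap the operator user on the nodes
```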
2025-09-02 00:22:29.503567 | orchestrator | 2025-09-02 00:22:29.503739 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-02 00:22:29.503759 | orchestrator | 2025-09-02 00:22:29.503771 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-02 00:22:29.503783 | orchestrator | Tuesday 02 September 2025 00:22:17 +0000 (0:00:00.158) 0:00:00.158 ***** 2025-09-02 00:22:29.503795 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:22:29.503808 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:22:29.503819 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:22:29.503830 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:22:29.503841 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:22:29.503852 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:22:29.503862 | orchestrator | 2025-09-02 00:22:29.503873 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-02 00:22:29.503885 | orchestrator | Tuesday 02 September 2025 00:22:20 +0000 (0:00:03.565) 0:00:03.724 ***** 2025-09-02 00:22:29.503896 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:22:29.503907 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:22:29.503918 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:22:29.503929 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:22:29.503940 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:22:29.503951 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:22:29.503962 | orchestrator | 2025-09-02 00:22:29.503973 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-02 00:22:29.503984 | orchestrator | 2025-09-02 00:22:29.503995 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-02 00:22:29.504006 | orchestrator | Tuesday 02 September 2025 00:22:21 +0000 (0:00:00.751) 0:00:04.476 ***** 2025-09-02 00:22:29.504017 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:22:29.504028 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:22:29.504039 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:22:29.504050 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:22:29.504061 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:22:29.504096 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:22:29.504110 | orchestrator | 2025-09-02 00:22:29.504124 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-02 00:22:29.504136 | orchestrator | Tuesday 02 September 2025 00:22:21 +0000 (0:00:00.168) 0:00:04.644 ***** 2025-09-02 00:22:29.504148 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:22:29.504161 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:22:29.504173 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:22:29.504185 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:22:29.504198 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:22:29.504210 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:22:29.504223 | orchestrator | 2025-09-02 00:22:29.504235 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-02 00:22:29.504248 | orchestrator | Tuesday 02 September 2025 00:22:22 +0000 (0:00:00.201) 0:00:04.845 ***** 2025-09-02 00:22:29.504261 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:22:29.504274 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:22:29.504286 | orchestrator | changed: [testbed-node-0] 2025-09-02 
00:22:29.504299 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:22:29.504312 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:22:29.504325 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:22:29.504338 | orchestrator | 2025-09-02 00:22:29.504351 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-02 00:22:29.504364 | orchestrator | Tuesday 02 September 2025 00:22:22 +0000 (0:00:00.665) 0:00:05.511 ***** 2025-09-02 00:22:29.504377 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:22:29.504389 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:22:29.504402 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:22:29.504415 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:22:29.504428 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:22:29.504440 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:22:29.504453 | orchestrator | 2025-09-02 00:22:29.504465 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-02 00:22:29.504477 | orchestrator | Tuesday 02 September 2025 00:22:23 +0000 (0:00:00.826) 0:00:06.337 ***** 2025-09-02 00:22:29.504488 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-02 00:22:29.504499 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-02 00:22:29.504510 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-02 00:22:29.504520 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-02 00:22:29.504532 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-02 00:22:29.504542 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-02 00:22:29.504553 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-09-02 00:22:29.504564 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-09-02 00:22:29.504575 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-09-02 00:22:29.504585 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-09-02 00:22:29.504596 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-09-02 00:22:29.504607 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-02 00:22:29.504618 | orchestrator | 2025-09-02 00:22:29.504629 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-02 00:22:29.504640 | orchestrator | Tuesday 02 September 2025 00:22:24 +0000 (0:00:01.112) 0:00:07.450 ***** 2025-09-02 00:22:29.504651 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:22:29.504661 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:22:29.504672 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:22:29.504701 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:22:29.504712 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:22:29.504723 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:22:29.504734 | orchestrator | 2025-09-02 00:22:29.504745 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-02 00:22:29.504756 | orchestrator | Tuesday 02 September 2025 00:22:26 +0000 (0:00:01.381) 0:00:08.831 ***** 2025-09-02 00:22:29.504776 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-02 00:22:29.504788 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-09-02 00:22:29.504799 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-09-02 00:22:29.504810 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-09-02 00:22:29.504840 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-09-02 00:22:29.504852 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-09-02 00:22:29.504862 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-09-02 00:22:29.504873 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-09-02 00:22:29.504884 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-09-02 00:22:29.504895 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-09-02 00:22:29.504905 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-09-02 00:22:29.504916 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-09-02 00:22:29.504927 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-09-02 00:22:29.504938 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-09-02 00:22:29.504948 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-09-02 00:22:29.504959 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-09-02 00:22:29.504970 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-09-02 00:22:29.504981 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-09-02 00:22:29.504991 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-09-02 00:22:29.505002 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-09-02 00:22:29.505013 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-09-02 00:22:29.505024 | orchestrator | 2025-09-02 00:22:29.505035 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-02 00:22:29.505047 | orchestrator | Tuesday 02 September 2025 00:22:27 +0000 (0:00:01.302) 0:00:10.133 ***** 2025-09-02 00:22:29.505058 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:22:29.505068 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:22:29.505079 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:22:29.505090 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:22:29.505100 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:22:29.505111 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:22:29.505121 | orchestrator | 2025-09-02 00:22:29.505132 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-02 00:22:29.505143 | orchestrator | Tuesday 02 September 2025 00:22:27 +0000 (0:00:00.158) 0:00:10.292 ***** 2025-09-02 00:22:29.505154 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:22:29.505165 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:22:29.505176 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:22:29.505186 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:22:29.505197 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:22:29.505207 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:22:29.505218 | orchestrator | 2025-09-02 00:22:29.505229 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] 
************ 2025-09-02 00:22:29.505239 | orchestrator | Tuesday 02 September 2025 00:22:28 +0000 (0:00:00.600) 0:00:10.892 ***** 2025-09-02 00:22:29.505250 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:22:29.505261 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:22:29.505271 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:22:29.505282 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:22:29.505293 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:22:29.505303 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:22:29.505321 | orchestrator | 2025-09-02 00:22:29.505332 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-02 00:22:29.505343 | orchestrator | Tuesday 02 September 2025 00:22:28 +0000 (0:00:00.178) 0:00:11.070 ***** 2025-09-02 00:22:29.505358 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-02 00:22:29.505370 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-02 00:22:29.505380 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:22:29.505391 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:22:29.505402 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-02 00:22:29.505412 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-02 00:22:29.505423 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:22:29.505433 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:22:29.505444 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-02 00:22:29.505455 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:22:29.505465 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-02 00:22:29.505476 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:22:29.505487 | orchestrator | 2025-09-02 00:22:29.505516 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-02 00:22:29.505528 | orchestrator | Tuesday 02 September 2025 00:22:29 +0000 (0:00:00.749) 0:00:11.820 ***** 2025-09-02 00:22:29.505539 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:22:29.505549 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:22:29.505560 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:22:29.505571 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:22:29.505581 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:22:29.505592 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:22:29.505602 | orchestrator | 2025-09-02 00:22:29.505613 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-02 00:22:29.505624 | orchestrator | Tuesday 02 September 2025 00:22:29 +0000 (0:00:00.139) 0:00:11.959 ***** 2025-09-02 00:22:29.505635 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:22:29.505645 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:22:29.505656 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:22:29.505666 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:22:29.505677 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:22:29.505706 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:22:29.505717 | orchestrator | 2025-09-02 00:22:29.505727 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-02 00:22:29.505743 | orchestrator | Tuesday 02 September 2025 00:22:29 +0000 (0:00:00.166) 0:00:12.126 ***** 2025-09-02 00:22:29.505754 | orchestrator | skipping: [testbed-node-0] 
2025-09-02 00:22:29.505766 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:22:29.505776 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:22:29.505787 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:22:29.505805 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:22:30.635809 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:22:30.635921 | orchestrator | 2025-09-02 00:22:30.635936 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-02 00:22:30.635950 | orchestrator | Tuesday 02 September 2025 00:22:29 +0000 (0:00:00.171) 0:00:12.298 ***** 2025-09-02 00:22:30.635961 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:22:30.635972 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:22:30.635983 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:22:30.635994 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:22:30.636004 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:22:30.636016 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:22:30.636026 | orchestrator | 2025-09-02 00:22:30.636037 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-02 00:22:30.636048 | orchestrator | Tuesday 02 September 2025 00:22:30 +0000 (0:00:00.663) 0:00:12.961 ***** 2025-09-02 00:22:30.636059 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:22:30.636070 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:22:30.636107 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:22:30.636119 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:22:30.636130 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:22:30.636142 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:22:30.636161 | orchestrator | 2025-09-02 00:22:30.636180 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:22:30.636200 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-02 00:22:30.636220 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-02 00:22:30.636238 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-02 00:22:30.636256 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-02 00:22:30.636274 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-02 00:22:30.636291 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-02 00:22:30.636309 | orchestrator | 2025-09-02 00:22:30.636328 | orchestrator | 2025-09-02 00:22:30.636347 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:22:30.636365 | orchestrator | Tuesday 02 September 2025 00:22:30 +0000 (0:00:00.212) 0:00:13.174 ***** 2025-09-02 00:22:30.636385 | orchestrator | =============================================================================== 2025-09-02 00:22:30.636405 | orchestrator | Gathering Facts --------------------------------------------------------- 3.57s 2025-09-02 00:22:30.636425 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.38s 2025-09-02 00:22:30.636439 | orchestrator | osism.commons.operator : Set language variables in .bashrc 
configuration file --- 1.30s 2025-09-02 00:22:30.636452 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.11s 2025-09-02 00:22:30.636466 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.83s 2025-09-02 00:22:30.636478 | orchestrator | Do not require tty for all users ---------------------------------------- 0.75s 2025-09-02 00:22:30.636492 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.75s 2025-09-02 00:22:30.636504 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.67s 2025-09-02 00:22:30.636517 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s 2025-09-02 00:22:30.636531 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.60s 2025-09-02 00:22:30.636543 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s 2025-09-02 00:22:30.636556 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.20s 2025-09-02 00:22:30.636568 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s 2025-09-02 00:22:30.636580 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s 2025-09-02 00:22:30.636593 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s 2025-09-02 00:22:30.636606 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s 2025-09-02 00:22:30.636619 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s 2025-09-02 00:22:30.636632 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2025-09-02 00:22:30.934470 | orchestrator | + osism apply --environment custom facts 2025-09-02 00:22:32.877157 | orchestrator | 2025-09-02 00:22:32 | INFO  | Trying to run play facts in environment custom 2025-09-02 00:22:43.083665 | orchestrator | 2025-09-02 00:22:43 | INFO  | Task fe64092e-eb54-4ae5-b16b-d2388ff246cc (facts) was prepared for execution. 2025-09-02 00:22:43.083825 | orchestrator | 2025-09-02 00:22:43 | INFO  | It takes a moment until task fe64092e-eb54-4ae5-b16b-d2388ff246cc (facts) has been started and output is visible here. 
2025-09-02 00:23:27.155640 | orchestrator | 2025-09-02 00:23:27.155830 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-09-02 00:23:27.155847 | orchestrator | 2025-09-02 00:23:27.155859 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-02 00:23:27.155872 | orchestrator | Tuesday 02 September 2025 00:22:47 +0000 (0:00:00.092) 0:00:00.092 ***** 2025-09-02 00:23:27.155883 | orchestrator | ok: [testbed-manager] 2025-09-02 00:23:27.155896 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:23:27.155908 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:23:27.155918 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:23:27.155929 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:23:27.155940 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:23:27.155951 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:23:27.155961 | orchestrator | 2025-09-02 00:23:27.155972 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-09-02 00:23:27.155983 | orchestrator | Tuesday 02 September 2025 00:22:48 +0000 (0:00:01.524) 0:00:01.617 ***** 2025-09-02 00:23:27.155994 | orchestrator | ok: [testbed-manager] 2025-09-02 00:23:27.156004 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:23:27.156015 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:23:27.156026 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:23:27.156036 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:23:27.156047 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:23:27.156058 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:23:27.156068 | orchestrator | 2025-09-02 00:23:27.156079 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-09-02 00:23:27.156090 | orchestrator | 2025-09-02 00:23:27.156101 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-02 00:23:27.156112 | orchestrator | Tuesday 02 September 2025 00:22:49 +0000 (0:00:01.201) 0:00:02.818 ***** 2025-09-02 00:23:27.156123 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:23:27.156134 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:23:27.156144 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:23:27.156155 | orchestrator | 2025-09-02 00:23:27.156169 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-02 00:23:27.156182 | orchestrator | Tuesday 02 September 2025 00:22:49 +0000 (0:00:00.122) 0:00:02.940 ***** 2025-09-02 00:23:27.156194 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:23:27.156207 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:23:27.156220 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:23:27.156232 | orchestrator | 2025-09-02 00:23:27.156245 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-02 00:23:27.156258 | orchestrator | Tuesday 02 September 2025 00:22:50 +0000 (0:00:00.208) 0:00:03.149 ***** 2025-09-02 00:23:27.156271 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:23:27.156283 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:23:27.156295 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:23:27.156309 | orchestrator | 2025-09-02 00:23:27.156322 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-02 00:23:27.156336 | orchestrator | Tuesday 02 
September 2025 00:22:50 +0000 (0:00:00.202) 0:00:03.352 ***** 2025-09-02 00:23:27.156349 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:23:27.156363 | orchestrator | 2025-09-02 00:23:27.156375 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-02 00:23:27.156388 | orchestrator | Tuesday 02 September 2025 00:22:50 +0000 (0:00:00.155) 0:00:03.508 ***** 2025-09-02 00:23:27.156429 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:23:27.156442 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:23:27.156454 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:23:27.156467 | orchestrator | 2025-09-02 00:23:27.156479 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-02 00:23:27.156492 | orchestrator | Tuesday 02 September 2025 00:22:50 +0000 (0:00:00.385) 0:00:03.894 ***** 2025-09-02 00:23:27.156505 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:23:27.156518 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:23:27.156528 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:23:27.156539 | orchestrator | 2025-09-02 00:23:27.156550 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-02 00:23:27.156560 | orchestrator | Tuesday 02 September 2025 00:22:51 +0000 (0:00:00.137) 0:00:04.031 ***** 2025-09-02 00:23:27.156571 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:23:27.156582 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:23:27.156592 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:23:27.156603 | orchestrator | 2025-09-02 00:23:27.156614 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-02 00:23:27.156625 | orchestrator | Tuesday 02 September 2025 00:22:52 +0000 (0:00:00.991) 0:00:05.022 ***** 2025-09-02 00:23:27.156677 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:23:27.156691 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:23:27.156701 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:23:27.156712 | orchestrator | 2025-09-02 00:23:27.156724 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-02 00:23:27.156734 | orchestrator | Tuesday 02 September 2025 00:22:52 +0000 (0:00:00.420) 0:00:05.442 ***** 2025-09-02 00:23:27.156745 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:23:27.156756 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:23:27.156767 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:23:27.156777 | orchestrator | 2025-09-02 00:23:27.156788 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-02 00:23:27.156799 | orchestrator | Tuesday 02 September 2025 00:22:53 +0000 (0:00:00.979) 0:00:06.422 ***** 2025-09-02 00:23:27.156809 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:23:27.156820 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:23:27.156831 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:23:27.156841 | orchestrator | 2025-09-02 00:23:27.156852 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-09-02 00:23:27.156863 | orchestrator | Tuesday 02 September 2025 00:23:10 +0000 (0:00:16.926) 0:00:23.349 ***** 2025-09-02 00:23:27.156873 | orchestrator | skipping: 
[testbed-node-3] 2025-09-02 00:23:27.156889 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:23:27.156900 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:23:27.156911 | orchestrator | 2025-09-02 00:23:27.156921 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-09-02 00:23:27.156951 | orchestrator | Tuesday 02 September 2025 00:23:10 +0000 (0:00:00.108) 0:00:23.457 ***** 2025-09-02 00:23:27.156963 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:23:27.156974 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:23:27.156984 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:23:27.156995 | orchestrator | 2025-09-02 00:23:27.157006 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-02 00:23:27.157016 | orchestrator | Tuesday 02 September 2025 00:23:17 +0000 (0:00:07.351) 0:00:30.809 ***** 2025-09-02 00:23:27.157027 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:23:27.157038 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:23:27.157048 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:23:27.157059 | orchestrator | 2025-09-02 00:23:27.157069 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-02 00:23:27.157080 | orchestrator | Tuesday 02 September 2025 00:23:18 +0000 (0:00:00.430) 0:00:31.239 ***** 2025-09-02 00:23:27.157091 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-09-02 00:23:27.157110 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-09-02 00:23:27.157121 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-09-02 00:23:27.157132 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-09-02 00:23:27.157142 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-09-02 00:23:27.157153 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-09-02 00:23:27.157164 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-09-02 00:23:27.157174 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-09-02 00:23:27.157185 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-09-02 00:23:27.157196 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-09-02 00:23:27.157206 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-09-02 00:23:27.157217 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-09-02 00:23:27.157228 | orchestrator | 2025-09-02 00:23:27.157238 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-02 00:23:27.157249 | orchestrator | Tuesday 02 September 2025 00:23:21 +0000 (0:00:03.513) 0:00:34.753 ***** 2025-09-02 00:23:27.157260 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:23:27.157270 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:23:27.157281 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:23:27.157292 | orchestrator | 2025-09-02 00:23:27.157303 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-02 00:23:27.157314 | orchestrator | 2025-09-02 00:23:27.157325 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-02 00:23:27.157336 | orchestrator | Tuesday 
02 September 2025 00:23:23 +0000 (0:00:01.334) 0:00:36.087 ***** 2025-09-02 00:23:27.157346 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:23:27.157357 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:23:27.157368 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:23:27.157379 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:23:27.157389 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:23:27.157400 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:23:27.157411 | orchestrator | ok: [testbed-manager] 2025-09-02 00:23:27.157422 | orchestrator | 2025-09-02 00:23:27.157432 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:23:27.157444 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:23:27.157456 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:23:27.157469 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:23:27.157480 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:23:27.157491 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:23:27.157502 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:23:27.157513 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:23:27.157523 | orchestrator | 2025-09-02 00:23:27.157534 | orchestrator | 2025-09-02 00:23:27.157545 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:23:27.157556 | orchestrator | Tuesday 02 September 2025 00:23:27 +0000 (0:00:04.063) 0:00:40.150 ***** 2025-09-02 00:23:27.157576 | orchestrator | =============================================================================== 2025-09-02 00:23:27.157587 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.93s 2025-09-02 00:23:27.157598 | orchestrator | Install required packages (Debian) -------------------------------------- 7.35s 2025-09-02 00:23:27.157608 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.06s 2025-09-02 00:23:27.157619 | orchestrator | Copy fact files --------------------------------------------------------- 3.51s 2025-09-02 00:23:27.157635 | orchestrator | Create custom facts directory ------------------------------------------- 1.52s 2025-09-02 00:23:27.157669 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.33s 2025-09-02 00:23:27.157688 | orchestrator | Copy fact file ---------------------------------------------------------- 1.20s 2025-09-02 00:23:27.366888 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.99s 2025-09-02 00:23:27.367011 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 0.98s 2025-09-02 00:23:27.367036 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s 2025-09-02 00:23:27.367048 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.42s 2025-09-02 00:23:27.367059 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.39s 
2025-09-02 00:23:27.367070 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s 2025-09-02 00:23:27.367081 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s 2025-09-02 00:23:27.367092 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s 2025-09-02 00:23:27.367104 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s 2025-09-02 00:23:27.367115 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2025-09-02 00:23:27.367125 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2025-09-02 00:23:27.695911 | orchestrator | + osism apply bootstrap 2025-09-02 00:23:39.748742 | orchestrator | 2025-09-02 00:23:39 | INFO  | Task 8b26bade-4111-4c91-82d1-79f0192ec303 (bootstrap) was prepared for execution. 2025-09-02 00:23:39.748871 | orchestrator | 2025-09-02 00:23:39 | INFO  | It takes a moment until task 8b26bade-4111-4c91-82d1-79f0192ec303 (bootstrap) has been started and output is visible here. 2025-09-02 00:23:57.092717 | orchestrator | 2025-09-02 00:23:57.092895 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-09-02 00:23:57.092911 | orchestrator | 2025-09-02 00:23:57.092924 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-09-02 00:23:57.092936 | orchestrator | Tuesday 02 September 2025 00:23:44 +0000 (0:00:00.189) 0:00:00.189 ***** 2025-09-02 00:23:57.092947 | orchestrator | ok: [testbed-manager] 2025-09-02 00:23:57.092960 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:23:57.092971 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:23:57.092981 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:23:57.092992 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:23:57.093003 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:23:57.093014 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:23:57.093024 | orchestrator | 2025-09-02 00:23:57.093035 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-02 00:23:57.093046 | orchestrator | 2025-09-02 00:23:57.093057 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-02 00:23:57.093068 | orchestrator | Tuesday 02 September 2025 00:23:44 +0000 (0:00:00.265) 0:00:00.455 ***** 2025-09-02 00:23:57.093079 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:23:57.093090 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:23:57.093100 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:23:57.093111 | orchestrator | ok: [testbed-manager] 2025-09-02 00:23:57.093122 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:23:57.093132 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:23:57.093166 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:23:57.093177 | orchestrator | 2025-09-02 00:23:57.093189 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-09-02 00:23:57.093199 | orchestrator | 2025-09-02 00:23:57.093210 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-02 00:23:57.093221 | orchestrator | Tuesday 02 September 2025 00:23:49 +0000 (0:00:04.670) 0:00:05.125 ***** 2025-09-02 00:23:57.093232 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  
2025-09-02 00:23:57.093243 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-02 00:23:57.093254 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-02 00:23:57.093265 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-09-02 00:23:57.093275 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-02 00:23:57.093286 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-02 00:23:57.093297 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-09-02 00:23:57.093307 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-02 00:23:57.093318 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-02 00:23:57.093328 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-02 00:23:57.093340 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-02 00:23:57.093350 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-02 00:23:57.093361 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-02 00:23:57.093372 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-02 00:23:57.093382 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-09-02 00:23:57.093393 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-02 00:23:57.093403 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:23:57.093414 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-02 00:23:57.093425 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-02 00:23:57.093435 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-02 00:23:57.093446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-09-02 00:23:57.093456 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-09-02 00:23:57.093468 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-09-02 00:23:57.093478 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-02 00:23:57.093489 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:23:57.093500 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-02 00:23:57.093530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-02 00:23:57.093541 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-02 00:23:57.093552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-02 00:23:57.093562 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-09-02 00:23:57.093573 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-02 00:23:57.093583 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-02 00:23:57.093594 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-09-02 00:23:57.093604 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-02 00:23:57.093615 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-02 00:23:57.093647 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-02 00:23:57.093658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-02 00:23:57.093669 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  
2025-09-02 00:23:57.093680 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:23:57.093690 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-09-02 00:23:57.093710 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-02 00:23:57.093721 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-09-02 00:23:57.093732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-02 00:23:57.093743 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-02 00:23:57.093754 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-09-02 00:23:57.093765 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-09-02 00:23:57.093795 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-02 00:23:57.093806 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:23:57.093817 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-09-02 00:23:57.093828 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:23:57.093839 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-09-02 00:23:57.093850 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-09-02 00:23:57.093861 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:23:57.093871 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-09-02 00:23:57.093882 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-09-02 00:23:57.093893 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:23:57.093904 | orchestrator | 2025-09-02 00:23:57.093915 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-09-02 00:23:57.093926 | orchestrator | 2025-09-02 00:23:57.093937 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-09-02 00:23:57.093948 | orchestrator | Tuesday 02 September 2025 00:23:49 +0000 (0:00:00.504) 0:00:05.630 ***** 2025-09-02 00:23:57.093959 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:23:57.093970 | orchestrator | ok: [testbed-manager] 2025-09-02 00:23:57.093981 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:23:57.093991 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:23:57.094002 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:23:57.094013 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:23:57.094086 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:23:57.094097 | orchestrator | 2025-09-02 00:23:57.094108 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-09-02 00:23:57.094119 | orchestrator | Tuesday 02 September 2025 00:23:50 +0000 (0:00:01.352) 0:00:06.982 ***** 2025-09-02 00:23:57.094130 | orchestrator | ok: [testbed-manager] 2025-09-02 00:23:57.094140 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:23:57.094151 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:23:57.094162 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:23:57.094172 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:23:57.094183 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:23:57.094194 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:23:57.094204 | orchestrator | 2025-09-02 00:23:57.094215 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-09-02 00:23:57.094226 | orchestrator | Tuesday 02 September 2025 00:23:52 +0000 (0:00:01.301) 0:00:08.284 ***** 
2025-09-02 00:23:57.094238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:23:57.094251 | orchestrator | 2025-09-02 00:23:57.094262 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-09-02 00:23:57.094273 | orchestrator | Tuesday 02 September 2025 00:23:52 +0000 (0:00:00.309) 0:00:08.594 ***** 2025-09-02 00:23:57.094284 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:23:57.094295 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:23:57.094306 | orchestrator | changed: [testbed-manager] 2025-09-02 00:23:57.094317 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:23:57.094327 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:23:57.094338 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:23:57.094349 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:23:57.094367 | orchestrator | 2025-09-02 00:23:57.094378 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-09-02 00:23:57.094389 | orchestrator | Tuesday 02 September 2025 00:23:54 +0000 (0:00:02.094) 0:00:10.688 ***** 2025-09-02 00:23:57.094400 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:23:57.094412 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:23:57.094425 | orchestrator | 2025-09-02 00:23:57.094441 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-09-02 00:23:57.094452 | orchestrator | Tuesday 02 September 2025 00:23:54 +0000 (0:00:00.281) 0:00:10.970 ***** 2025-09-02 00:23:57.094463 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:23:57.094474 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:23:57.094484 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:23:57.094495 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:23:57.094505 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:23:57.094516 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:23:57.094527 | orchestrator | 2025-09-02 00:23:57.094538 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-09-02 00:23:57.094548 | orchestrator | Tuesday 02 September 2025 00:23:55 +0000 (0:00:00.979) 0:00:11.950 ***** 2025-09-02 00:23:57.094559 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:23:57.094570 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:23:57.094580 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:23:57.094591 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:23:57.094601 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:23:57.094612 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:23:57.094639 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:23:57.094651 | orchestrator | 2025-09-02 00:23:57.094662 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-02 00:23:57.094673 | orchestrator | Tuesday 02 September 2025 00:23:56 +0000 (0:00:00.568) 0:00:12.519 ***** 2025-09-02 00:23:57.094683 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:23:57.094694 | orchestrator | skipping: 
[testbed-node-1] 2025-09-02 00:23:57.094705 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:23:57.094715 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:23:57.094726 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:23:57.094737 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:23:57.094748 | orchestrator | ok: [testbed-manager] 2025-09-02 00:23:57.094759 | orchestrator | 2025-09-02 00:23:57.094770 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-02 00:23:57.094782 | orchestrator | Tuesday 02 September 2025 00:23:56 +0000 (0:00:00.425) 0:00:12.944 ***** 2025-09-02 00:23:57.094793 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:23:57.094804 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:23:57.094822 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:24:09.678500 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:24:09.678692 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:24:09.678722 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:24:09.678741 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:24:09.678761 | orchestrator | 2025-09-02 00:24:09.678782 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-02 00:24:09.678804 | orchestrator | Tuesday 02 September 2025 00:23:57 +0000 (0:00:00.227) 0:00:13.172 ***** 2025-09-02 00:24:09.678826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:24:09.678849 | orchestrator | 2025-09-02 00:24:09.678868 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-02 00:24:09.678888 | orchestrator | Tuesday 02 September 2025 00:23:57 +0000 (0:00:00.326) 0:00:13.499 ***** 2025-09-02 00:24:09.678942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:24:09.678960 | orchestrator | 2025-09-02 00:24:09.678977 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-02 00:24:09.678995 | orchestrator | Tuesday 02 September 2025 00:23:57 +0000 (0:00:00.357) 0:00:13.856 ***** 2025-09-02 00:24:09.679015 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:24:09.679037 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:09.679056 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:09.679075 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:09.679096 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:09.679110 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:09.679123 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:09.679137 | orchestrator | 2025-09-02 00:24:09.679150 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-02 00:24:09.679164 | orchestrator | Tuesday 02 September 2025 00:23:59 +0000 (0:00:01.471) 0:00:15.328 ***** 2025-09-02 00:24:09.679177 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:24:09.679189 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:24:09.679202 | orchestrator | skipping: [testbed-node-1] 2025-09-02 
00:24:09.679215 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:24:09.679227 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:24:09.679240 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:24:09.679251 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:24:09.679264 | orchestrator | 2025-09-02 00:24:09.679276 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-02 00:24:09.679289 | orchestrator | Tuesday 02 September 2025 00:23:59 +0000 (0:00:00.236) 0:00:15.564 ***** 2025-09-02 00:24:09.679302 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:09.679314 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:09.679326 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:09.679339 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:24:09.679352 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:09.679363 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:09.679373 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:09.679384 | orchestrator | 2025-09-02 00:24:09.679395 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-02 00:24:09.679405 | orchestrator | Tuesday 02 September 2025 00:24:00 +0000 (0:00:00.567) 0:00:16.132 ***** 2025-09-02 00:24:09.679416 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:24:09.679427 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:24:09.679438 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:24:09.679449 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:24:09.679459 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:24:09.679470 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:24:09.679481 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:24:09.679491 | orchestrator | 2025-09-02 00:24:09.679502 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-02 00:24:09.679514 | orchestrator | Tuesday 02 September 2025 00:24:00 +0000 (0:00:00.253) 0:00:16.385 ***** 2025-09-02 00:24:09.679525 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:09.679536 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:24:09.679546 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:24:09.679557 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:24:09.679567 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:24:09.679578 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:24:09.679589 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:24:09.679599 | orchestrator | 2025-09-02 00:24:09.679610 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-02 00:24:09.679646 | orchestrator | Tuesday 02 September 2025 00:24:00 +0000 (0:00:00.545) 0:00:16.930 ***** 2025-09-02 00:24:09.679667 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:09.679678 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:24:09.679689 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:24:09.679699 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:24:09.679710 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:24:09.679720 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:24:09.679731 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:24:09.679741 | orchestrator | 2025-09-02 00:24:09.679752 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-02 
00:24:09.679763 | orchestrator | Tuesday 02 September 2025 00:24:02 +0000 (0:00:01.211) 0:00:18.142 ***** 2025-09-02 00:24:09.679773 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:24:09.679784 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:09.679795 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:09.679806 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:09.679816 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:09.679827 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:09.679838 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:09.679848 | orchestrator | 2025-09-02 00:24:09.679859 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-02 00:24:09.679870 | orchestrator | Tuesday 02 September 2025 00:24:03 +0000 (0:00:01.205) 0:00:19.348 ***** 2025-09-02 00:24:09.679903 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:24:09.679915 | orchestrator | 2025-09-02 00:24:09.679926 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-02 00:24:09.679937 | orchestrator | Tuesday 02 September 2025 00:24:03 +0000 (0:00:00.462) 0:00:19.811 ***** 2025-09-02 00:24:09.679947 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:24:09.679958 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:24:09.679969 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:24:09.679979 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:24:09.679990 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:24:09.680001 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:24:09.680011 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:24:09.680022 | orchestrator | 2025-09-02 00:24:09.680033 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-02 00:24:09.680043 | orchestrator | Tuesday 02 September 2025 00:24:05 +0000 (0:00:01.309) 0:00:21.120 ***** 2025-09-02 00:24:09.680054 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:09.680064 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:09.680075 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:09.680085 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:24:09.680149 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:09.680161 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:09.680172 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:09.680183 | orchestrator | 2025-09-02 00:24:09.680194 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-02 00:24:09.680205 | orchestrator | Tuesday 02 September 2025 00:24:05 +0000 (0:00:00.239) 0:00:21.359 ***** 2025-09-02 00:24:09.680216 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:09.680227 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:09.680237 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:09.680248 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:24:09.680258 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:09.680269 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:09.680280 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:09.680290 | orchestrator | 2025-09-02 00:24:09.680301 | orchestrator | TASK [osism.commons.repository : Set repositories to default] 
****************** 2025-09-02 00:24:09.680312 | orchestrator | Tuesday 02 September 2025 00:24:05 +0000 (0:00:00.226) 0:00:21.586 ***** 2025-09-02 00:24:09.680323 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:09.680342 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:09.680353 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:09.680364 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:24:09.680374 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:09.680385 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:09.680395 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:09.680406 | orchestrator | 2025-09-02 00:24:09.680417 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-02 00:24:09.680428 | orchestrator | Tuesday 02 September 2025 00:24:05 +0000 (0:00:00.211) 0:00:21.798 ***** 2025-09-02 00:24:09.680440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:24:09.680452 | orchestrator | 2025-09-02 00:24:09.680463 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-02 00:24:09.680474 | orchestrator | Tuesday 02 September 2025 00:24:06 +0000 (0:00:00.307) 0:00:22.106 ***** 2025-09-02 00:24:09.680485 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:09.680496 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:09.680506 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:09.680517 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:24:09.680527 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:09.680538 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:09.680548 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:09.680559 | orchestrator | 2025-09-02 00:24:09.680575 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-02 00:24:09.680586 | orchestrator | Tuesday 02 September 2025 00:24:06 +0000 (0:00:00.637) 0:00:22.743 ***** 2025-09-02 00:24:09.680596 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:24:09.680607 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:24:09.680636 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:24:09.680647 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:24:09.680657 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:24:09.680668 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:24:09.680678 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:24:09.680689 | orchestrator | 2025-09-02 00:24:09.680700 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-02 00:24:09.680710 | orchestrator | Tuesday 02 September 2025 00:24:06 +0000 (0:00:00.220) 0:00:22.964 ***** 2025-09-02 00:24:09.680721 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:09.680732 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:09.680742 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:24:09.680753 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:24:09.680763 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:24:09.680774 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:09.680784 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:09.680795 | orchestrator | 2025-09-02 00:24:09.680805 | orchestrator | TASK [osism.commons.repository : 
Remove sources.list file] ********************* 2025-09-02 00:24:09.680816 | orchestrator | Tuesday 02 September 2025 00:24:08 +0000 (0:00:01.082) 0:00:24.047 ***** 2025-09-02 00:24:09.680827 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:09.680838 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:09.680848 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:09.680859 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:24:09.680869 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:09.680880 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:09.680890 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:09.680901 | orchestrator | 2025-09-02 00:24:09.680911 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-02 00:24:09.680922 | orchestrator | Tuesday 02 September 2025 00:24:08 +0000 (0:00:00.606) 0:00:24.654 ***** 2025-09-02 00:24:09.680933 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:09.680943 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:09.680954 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:09.680971 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:09.680989 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:24:50.113622 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:24:50.113743 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:24:50.113758 | orchestrator | 2025-09-02 00:24:50.113770 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-02 00:24:50.113782 | orchestrator | Tuesday 02 September 2025 00:24:09 +0000 (0:00:01.024) 0:00:25.679 ***** 2025-09-02 00:24:50.113792 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:50.113803 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:50.113812 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:50.113822 | orchestrator | changed: [testbed-manager] 2025-09-02 00:24:50.113832 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:24:50.113841 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:24:50.113851 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:24:50.113861 | orchestrator | 2025-09-02 00:24:50.113871 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-02 00:24:50.113880 | orchestrator | Tuesday 02 September 2025 00:24:26 +0000 (0:00:16.753) 0:00:42.432 ***** 2025-09-02 00:24:50.113890 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:50.113899 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:50.113909 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:50.113918 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:24:50.113928 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:50.113937 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:50.113947 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:50.113956 | orchestrator | 2025-09-02 00:24:50.113966 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-02 00:24:50.113975 | orchestrator | Tuesday 02 September 2025 00:24:26 +0000 (0:00:00.267) 0:00:42.700 ***** 2025-09-02 00:24:50.113985 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:50.113994 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:50.114004 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:50.114073 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:24:50.114085 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:50.114095 | orchestrator | ok: 
[testbed-node-4] 2025-09-02 00:24:50.114106 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:50.114125 | orchestrator | 2025-09-02 00:24:50.114137 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-02 00:24:50.114148 | orchestrator | Tuesday 02 September 2025 00:24:26 +0000 (0:00:00.218) 0:00:42.918 ***** 2025-09-02 00:24:50.114159 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:50.114172 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:50.114183 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:50.114194 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:24:50.114206 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:50.114217 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:50.114229 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:50.114239 | orchestrator | 2025-09-02 00:24:50.114250 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-02 00:24:50.114261 | orchestrator | Tuesday 02 September 2025 00:24:27 +0000 (0:00:00.236) 0:00:43.155 ***** 2025-09-02 00:24:50.114274 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:24:50.114288 | orchestrator | 2025-09-02 00:24:50.114300 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-02 00:24:50.114311 | orchestrator | Tuesday 02 September 2025 00:24:27 +0000 (0:00:00.286) 0:00:43.441 ***** 2025-09-02 00:24:50.114321 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:50.114333 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:50.114344 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:24:50.114355 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:50.114366 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:50.114377 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:50.114411 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:50.114422 | orchestrator | 2025-09-02 00:24:50.114434 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-02 00:24:50.114458 | orchestrator | Tuesday 02 September 2025 00:24:29 +0000 (0:00:01.664) 0:00:45.106 ***** 2025-09-02 00:24:50.114468 | orchestrator | changed: [testbed-manager] 2025-09-02 00:24:50.114478 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:24:50.114488 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:24:50.114497 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:24:50.114507 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:24:50.114516 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:24:50.114526 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:24:50.114535 | orchestrator | 2025-09-02 00:24:50.114545 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-02 00:24:50.114555 | orchestrator | Tuesday 02 September 2025 00:24:30 +0000 (0:00:01.090) 0:00:46.196 ***** 2025-09-02 00:24:50.114565 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:50.114595 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:50.114605 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:50.114614 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:50.114624 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:24:50.114633 | 
orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:50.114643 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:50.114652 | orchestrator | 2025-09-02 00:24:50.114662 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-02 00:24:50.114671 | orchestrator | Tuesday 02 September 2025 00:24:31 +0000 (0:00:00.817) 0:00:47.014 ***** 2025-09-02 00:24:50.114682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:24:50.114693 | orchestrator | 2025-09-02 00:24:50.114703 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-02 00:24:50.114713 | orchestrator | Tuesday 02 September 2025 00:24:31 +0000 (0:00:00.329) 0:00:47.343 ***** 2025-09-02 00:24:50.114723 | orchestrator | changed: [testbed-manager] 2025-09-02 00:24:50.114732 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:24:50.114742 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:24:50.114751 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:24:50.114761 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:24:50.114770 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:24:50.114779 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:24:50.114789 | orchestrator | 2025-09-02 00:24:50.114815 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-09-02 00:24:50.114826 | orchestrator | Tuesday 02 September 2025 00:24:32 +0000 (0:00:01.017) 0:00:48.361 ***** 2025-09-02 00:24:50.114836 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:24:50.114845 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:24:50.114855 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:24:50.114864 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:24:50.114874 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:24:50.114883 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:24:50.114893 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:24:50.114902 | orchestrator | 2025-09-02 00:24:50.114911 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-09-02 00:24:50.114921 | orchestrator | Tuesday 02 September 2025 00:24:32 +0000 (0:00:00.324) 0:00:48.686 ***** 2025-09-02 00:24:50.114930 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:24:50.114940 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:24:50.114949 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:24:50.114959 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:24:50.114968 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:24:50.114977 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:24:50.114987 | orchestrator | changed: [testbed-manager] 2025-09-02 00:24:50.115003 | orchestrator | 2025-09-02 00:24:50.115013 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-09-02 00:24:50.115022 | orchestrator | Tuesday 02 September 2025 00:24:44 +0000 (0:00:12.210) 0:01:00.896 ***** 2025-09-02 00:24:50.115032 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:50.115041 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:50.115051 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:50.115060 | orchestrator | ok: [testbed-node-2] 2025-09-02 
00:24:50.115070 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:50.115079 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:50.115089 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:50.115098 | orchestrator | 2025-09-02 00:24:50.115108 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-09-02 00:24:50.115117 | orchestrator | Tuesday 02 September 2025 00:24:45 +0000 (0:00:00.763) 0:01:01.660 ***** 2025-09-02 00:24:50.115127 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:50.115136 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:50.115146 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:24:50.115155 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:50.115164 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:50.115173 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:50.115183 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:50.115192 | orchestrator | 2025-09-02 00:24:50.115201 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-09-02 00:24:50.115211 | orchestrator | Tuesday 02 September 2025 00:24:46 +0000 (0:00:00.983) 0:01:02.643 ***** 2025-09-02 00:24:50.115220 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:50.115230 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:50.115239 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:50.115248 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:24:50.115258 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:50.115268 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:50.115277 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:50.115286 | orchestrator | 2025-09-02 00:24:50.115296 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-09-02 00:24:50.115306 | orchestrator | Tuesday 02 September 2025 00:24:46 +0000 (0:00:00.258) 0:01:02.902 ***** 2025-09-02 00:24:50.115316 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:50.115325 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:50.115334 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:50.115344 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:24:50.115353 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:50.115362 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:50.115372 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:50.115381 | orchestrator | 2025-09-02 00:24:50.115390 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-09-02 00:24:50.115400 | orchestrator | Tuesday 02 September 2025 00:24:47 +0000 (0:00:00.227) 0:01:03.130 ***** 2025-09-02 00:24:50.115410 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:24:50.115420 | orchestrator | 2025-09-02 00:24:50.115430 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-09-02 00:24:50.115440 | orchestrator | Tuesday 02 September 2025 00:24:47 +0000 (0:00:00.279) 0:01:03.409 ***** 2025-09-02 00:24:50.115449 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:50.115459 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:50.115468 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:24:50.115477 | orchestrator | ok: [testbed-node-3] 2025-09-02 
00:24:50.115487 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:50.115496 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:50.115505 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:50.115515 | orchestrator | 2025-09-02 00:24:50.115524 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-09-02 00:24:50.115541 | orchestrator | Tuesday 02 September 2025 00:24:49 +0000 (0:00:01.850) 0:01:05.259 ***** 2025-09-02 00:24:50.115550 | orchestrator | changed: [testbed-manager] 2025-09-02 00:24:50.115560 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:24:50.115583 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:24:50.115594 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:24:50.115603 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:24:50.115613 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:24:50.115622 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:24:50.115632 | orchestrator | 2025-09-02 00:24:50.115641 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-09-02 00:24:50.115651 | orchestrator | Tuesday 02 September 2025 00:24:49 +0000 (0:00:00.599) 0:01:05.859 ***** 2025-09-02 00:24:50.115660 | orchestrator | ok: [testbed-manager] 2025-09-02 00:24:50.115670 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:24:50.115679 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:24:50.115689 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:24:50.115698 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:24:50.115708 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:24:50.115717 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:24:50.115726 | orchestrator | 2025-09-02 00:24:50.115742 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-09-02 00:27:16.103177 | orchestrator | Tuesday 02 September 2025 00:24:50 +0000 (0:00:00.259) 0:01:06.118 ***** 2025-09-02 00:27:16.103301 | orchestrator | ok: [testbed-manager] 2025-09-02 00:27:16.103317 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:27:16.103329 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:27:16.103340 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:27:16.103350 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:27:16.103361 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:27:16.103391 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:27:16.103402 | orchestrator | 2025-09-02 00:27:16.103414 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-09-02 00:27:16.103479 | orchestrator | Tuesday 02 September 2025 00:24:51 +0000 (0:00:01.287) 0:01:07.406 ***** 2025-09-02 00:27:16.103490 | orchestrator | changed: [testbed-manager] 2025-09-02 00:27:16.103503 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:27:16.103514 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:27:16.103524 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:27:16.103535 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:27:16.103546 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:27:16.103557 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:27:16.103568 | orchestrator | 2025-09-02 00:27:16.103580 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-09-02 00:27:16.103591 | orchestrator | Tuesday 02 September 2025 00:24:53 +0000 (0:00:01.757) 0:01:09.164 ***** 2025-09-02 00:27:16.103602 | 
orchestrator | ok: [testbed-manager] 2025-09-02 00:27:16.103613 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:27:16.103624 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:27:16.103635 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:27:16.103646 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:27:16.103657 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:27:16.103668 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:27:16.103679 | orchestrator | 2025-09-02 00:27:16.103690 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-09-02 00:27:16.103701 | orchestrator | Tuesday 02 September 2025 00:24:55 +0000 (0:00:02.608) 0:01:11.772 ***** 2025-09-02 00:27:16.103714 | orchestrator | ok: [testbed-manager] 2025-09-02 00:27:16.103726 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:27:16.103739 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:27:16.103751 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:27:16.103764 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:27:16.103776 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:27:16.103789 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:27:16.103801 | orchestrator | 2025-09-02 00:27:16.103814 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-09-02 00:27:16.103851 | orchestrator | Tuesday 02 September 2025 00:25:36 +0000 (0:00:41.033) 0:01:52.805 ***** 2025-09-02 00:27:16.103865 | orchestrator | changed: [testbed-manager] 2025-09-02 00:27:16.103877 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:27:16.103890 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:27:16.103902 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:27:16.103914 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:27:16.103927 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:27:16.103940 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:27:16.103952 | orchestrator | 2025-09-02 00:27:16.103965 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-09-02 00:27:16.103978 | orchestrator | Tuesday 02 September 2025 00:26:55 +0000 (0:01:18.790) 0:03:11.596 ***** 2025-09-02 00:27:16.103990 | orchestrator | ok: [testbed-manager] 2025-09-02 00:27:16.104003 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:27:16.104016 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:27:16.104029 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:27:16.104042 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:27:16.104054 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:27:16.104067 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:27:16.104078 | orchestrator | 2025-09-02 00:27:16.104089 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-09-02 00:27:16.104101 | orchestrator | Tuesday 02 September 2025 00:26:57 +0000 (0:00:01.774) 0:03:13.370 ***** 2025-09-02 00:27:16.104112 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:27:16.104123 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:27:16.104139 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:27:16.104150 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:27:16.104161 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:27:16.104172 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:27:16.104183 | orchestrator | changed: [testbed-manager] 2025-09-02 00:27:16.104194 | orchestrator | 2025-09-02 00:27:16.104205 | orchestrator | TASK [osism.commons.sysctl : 
Include sysctl tasks] ***************************** 2025-09-02 00:27:16.104217 | orchestrator | Tuesday 02 September 2025 00:27:09 +0000 (0:00:12.071) 0:03:25.442 ***** 2025-09-02 00:27:16.104231 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-09-02 00:27:16.104257 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-09-02 00:27:16.104293 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-09-02 00:27:16.104307 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-09-02 00:27:16.104328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-09-02 00:27:16.104340 | orchestrator | 2025-09-02 00:27:16.104351 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-09-02 00:27:16.104362 | orchestrator | Tuesday 02 September 2025 00:27:09 +0000 (0:00:00.416) 0:03:25.858 ***** 2025-09-02 00:27:16.104373 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-02 00:27:16.104384 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:27:16.104396 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-02 00:27:16.104407 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-02 00:27:16.104437 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:27:16.104449 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:27:16.104460 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-02 00:27:16.104470 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:27:16.104481 | orchestrator | 
changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-02 00:27:16.104492 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-02 00:27:16.104503 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-02 00:27:16.104513 | orchestrator | 2025-09-02 00:27:16.104524 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-09-02 00:27:16.104535 | orchestrator | Tuesday 02 September 2025 00:27:10 +0000 (0:00:00.699) 0:03:26.558 ***** 2025-09-02 00:27:16.104545 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-02 00:27:16.104557 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-02 00:27:16.104568 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-02 00:27:16.104579 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-02 00:27:16.104589 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-02 00:27:16.104605 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-02 00:27:16.104616 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-02 00:27:16.104627 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-02 00:27:16.104638 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-02 00:27:16.104648 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-02 00:27:16.104659 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:27:16.104670 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-02 00:27:16.104681 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-02 00:27:16.104691 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-02 00:27:16.104702 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-02 00:27:16.104713 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-02 00:27:16.104723 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-02 00:27:16.104741 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-02 00:27:16.104752 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-02 00:27:16.104762 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-02 00:27:16.104773 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-02 00:27:16.104791 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-02 00:27:19.205716 | orchestrator | skipping: [testbed-node-4] 
=> (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-02 00:27:19.205824 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-02 00:27:19.205839 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:27:19.205852 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-02 00:27:19.205864 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-02 00:27:19.205875 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-02 00:27:19.205886 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-02 00:27:19.205897 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-02 00:27:19.205908 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-02 00:27:19.205919 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-02 00:27:19.205930 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-02 00:27:19.205941 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-02 00:27:19.205952 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-02 00:27:19.205963 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-02 00:27:19.205974 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-02 00:27:19.205985 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-02 00:27:19.205996 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:27:19.206007 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-02 00:27:19.206077 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-02 00:27:19.206090 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-02 00:27:19.206101 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-02 00:27:19.206112 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:27:19.206123 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-02 00:27:19.206134 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-02 00:27:19.206145 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-02 00:27:19.206155 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-02 00:27:19.206166 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-02 00:27:19.206177 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-02 00:27:19.206214 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-02 00:27:19.206225 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-02 00:27:19.206236 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-02 00:27:19.206249 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-02 00:27:19.206263 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-02 00:27:19.206275 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-02 00:27:19.206288 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-02 00:27:19.206300 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-02 00:27:19.206312 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-02 00:27:19.206325 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-02 00:27:19.206337 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-02 00:27:19.206350 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-02 00:27:19.206362 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-02 00:27:19.206375 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-02 00:27:19.206387 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-02 00:27:19.206447 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-02 00:27:19.206469 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-02 00:27:19.206489 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-02 00:27:19.206508 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-02 00:27:19.206529 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-02 00:27:19.206545 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-02 00:27:19.206558 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-02 00:27:19.206570 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-02 00:27:19.206582 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-02 00:27:19.206595 | orchestrator | 2025-09-02 00:27:19.206607 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-09-02 00:27:19.206618 | orchestrator | Tuesday 02 September 2025 00:27:16 +0000 (0:00:05.546) 0:03:32.104 ***** 2025-09-02 00:27:19.206629 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-02 00:27:19.206658 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 
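
For reference, the parameter sets applied by the sysctl tasks in this part of the run (vm.max_map_count=262144 on the OpenSearch nodes, the RabbitMQ TCP keepalive tuning on the control nodes, vm.swappiness=1 everywhere, plus the compute and k3s_node values further down) can be expressed as a plain ansible.posix.sysctl loop. The playbook below is only a minimal sketch built from the values visible in this log; it assumes the ansible.posix collection and an inventory group named rabbitmq, and is not the actual osism.commons.sysctl implementation:

    - name: Apply the kernel parameters shown in this log (illustrative sketch)
      hosts: all
      become: true
      vars:
        sysctl_generic:
          - { name: vm.swappiness, value: 1 }
        sysctl_rabbitmq:
          - { name: net.ipv4.tcp_keepalive_time, value: 6 }
          - { name: net.ipv4.tcp_keepalive_intvl, value: 3 }
          - { name: net.ipv4.tcp_keepalive_probes, value: 3 }
          - { name: net.core.somaxconn, value: 4096 }
      tasks:
        - name: Set generic parameters on every host
          ansible.posix.sysctl:
            name: "{{ item.name }}"
            value: "{{ item.value }}"
            state: present
            sysctl_set: true
            reload: true
          loop: "{{ sysctl_generic }}"

        - name: Set RabbitMQ parameters only on hosts in the rabbitmq group (group name assumed)
          ansible.posix.sysctl:
            name: "{{ item.name }}"
            value: "{{ item.value }}"
            state: present
            sysctl_set: true
          loop: "{{ sysctl_rabbitmq }}"
          when: inventory_hostname in groups['rabbitmq'] | default([])

Keeping the values in per-group lists mirrors what the log shows: for the same task some hosts report changed while hosts outside the group are skipped item by item.
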
2025-09-02 00:27:19.206669 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-02 00:27:19.206680 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-02 00:27:19.206691 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-02 00:27:19.206701 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-02 00:27:19.206712 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-02 00:27:19.206733 | orchestrator | 2025-09-02 00:27:19.206745 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-09-02 00:27:19.206755 | orchestrator | Tuesday 02 September 2025 00:27:17 +0000 (0:00:01.538) 0:03:33.643 ***** 2025-09-02 00:27:19.206766 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-02 00:27:19.206777 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-02 00:27:19.206788 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:27:19.206799 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-02 00:27:19.206810 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:27:19.206821 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-02 00:27:19.206832 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:27:19.206843 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:27:19.206854 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-02 00:27:19.206870 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-02 00:27:19.206881 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-02 00:27:19.206892 | orchestrator | 2025-09-02 00:27:19.206902 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-09-02 00:27:19.206913 | orchestrator | Tuesday 02 September 2025 00:27:18 +0000 (0:00:00.577) 0:03:34.220 ***** 2025-09-02 00:27:19.206924 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-02 00:27:19.206934 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:27:19.206945 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-02 00:27:19.206956 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:27:19.206967 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-02 00:27:19.206977 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:27:19.206988 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-02 00:27:19.206998 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:27:19.207009 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-02 00:27:19.207020 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 
1024}) 2025-09-02 00:27:19.207031 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-02 00:27:19.207041 | orchestrator | 2025-09-02 00:27:19.207052 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-09-02 00:27:19.207063 | orchestrator | Tuesday 02 September 2025 00:27:18 +0000 (0:00:00.700) 0:03:34.920 ***** 2025-09-02 00:27:19.207074 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:27:19.207084 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:27:19.207095 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:27:19.207106 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:27:19.207116 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:27:19.207135 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:27:30.846287 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:27:30.846409 | orchestrator | 2025-09-02 00:27:30.846427 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-09-02 00:27:30.846497 | orchestrator | Tuesday 02 September 2025 00:27:19 +0000 (0:00:00.287) 0:03:35.208 ***** 2025-09-02 00:27:30.846510 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:27:30.846522 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:27:30.846534 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:27:30.846568 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:27:30.846581 | orchestrator | ok: [testbed-manager] 2025-09-02 00:27:30.846591 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:27:30.846602 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:27:30.846613 | orchestrator | 2025-09-02 00:27:30.846624 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-09-02 00:27:30.846635 | orchestrator | Tuesday 02 September 2025 00:27:25 +0000 (0:00:05.815) 0:03:41.023 ***** 2025-09-02 00:27:30.846647 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-09-02 00:27:30.846658 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-09-02 00:27:30.846669 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:27:30.846680 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-09-02 00:27:30.846691 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:27:30.846701 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:27:30.846712 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-09-02 00:27:30.846723 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:27:30.846734 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-09-02 00:27:30.846745 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-09-02 00:27:30.846760 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:27:30.846771 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:27:30.846782 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-09-02 00:27:30.846792 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:27:30.846803 | orchestrator | 2025-09-02 00:27:30.846816 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-09-02 00:27:30.846830 | orchestrator | Tuesday 02 September 2025 00:27:25 +0000 (0:00:00.316) 0:03:41.340 ***** 2025-09-02 00:27:30.846843 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-09-02 00:27:30.846856 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-09-02 00:27:30.846868 | orchestrator | 
ok: [testbed-node-2] => (item=cron) 2025-09-02 00:27:30.846881 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-09-02 00:27:30.846893 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-09-02 00:27:30.846906 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-09-02 00:27:30.846918 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-09-02 00:27:30.846930 | orchestrator | 2025-09-02 00:27:30.846942 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-09-02 00:27:30.846955 | orchestrator | Tuesday 02 September 2025 00:27:26 +0000 (0:00:01.010) 0:03:42.351 ***** 2025-09-02 00:27:30.846969 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:27:30.846984 | orchestrator | 2025-09-02 00:27:30.846997 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-09-02 00:27:30.847009 | orchestrator | Tuesday 02 September 2025 00:27:26 +0000 (0:00:00.511) 0:03:42.863 ***** 2025-09-02 00:27:30.847022 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:27:30.847035 | orchestrator | ok: [testbed-manager] 2025-09-02 00:27:30.847048 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:27:30.847061 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:27:30.847073 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:27:30.847086 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:27:30.847114 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:27:30.847127 | orchestrator | 2025-09-02 00:27:30.847139 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-09-02 00:27:30.847152 | orchestrator | Tuesday 02 September 2025 00:27:28 +0000 (0:00:01.162) 0:03:44.026 ***** 2025-09-02 00:27:30.847166 | orchestrator | ok: [testbed-manager] 2025-09-02 00:27:30.847179 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:27:30.847190 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:27:30.847201 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:27:30.847211 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:27:30.847222 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:27:30.847238 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:27:30.847248 | orchestrator | 2025-09-02 00:27:30.847259 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-09-02 00:27:30.847270 | orchestrator | Tuesday 02 September 2025 00:27:28 +0000 (0:00:00.629) 0:03:44.655 ***** 2025-09-02 00:27:30.847280 | orchestrator | changed: [testbed-manager] 2025-09-02 00:27:30.847291 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:27:30.847302 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:27:30.847312 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:27:30.847323 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:27:30.847333 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:27:30.847344 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:27:30.847354 | orchestrator | 2025-09-02 00:27:30.847365 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-09-02 00:27:30.847376 | orchestrator | Tuesday 02 September 2025 00:27:29 +0000 (0:00:00.624) 0:03:45.279 ***** 2025-09-02 00:27:30.847386 | orchestrator | ok: [testbed-manager] 2025-09-02 
00:27:30.847397 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:27:30.847408 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:27:30.847418 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:27:30.847448 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:27:30.847459 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:27:30.847470 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:27:30.847480 | orchestrator | 2025-09-02 00:27:30.847491 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-09-02 00:27:30.847502 | orchestrator | Tuesday 02 September 2025 00:27:29 +0000 (0:00:00.602) 0:03:45.882 ***** 2025-09-02 00:27:30.847537 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756771426.459173, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 00:27:30.847553 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756771458.7095954, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 00:27:30.847566 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756771461.8967516, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 00:27:30.847578 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756771462.9587288, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 00:27:30.847603 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756771463.9598777, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 00:27:30.847615 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756771469.0880036, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 00:27:30.847627 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756771460.2691429, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 00:27:30.847656 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 00:27:46.891062 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 00:27:46.891186 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 00:27:46.891203 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 00:27:46.891239 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 00:27:46.891251 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 00:27:46.891263 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 00:27:46.891275 | orchestrator | 2025-09-02 00:27:46.891289 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-09-02 00:27:46.891319 | orchestrator | Tuesday 02 September 2025 00:27:30 +0000 (0:00:00.963) 0:03:46.845 ***** 2025-09-02 00:27:46.891332 | orchestrator | changed: [testbed-manager] 2025-09-02 00:27:46.891344 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:27:46.891355 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:27:46.891366 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:27:46.891376 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:27:46.891387 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:27:46.891397 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:27:46.891408 | orchestrator | 2025-09-02 00:27:46.891419 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-09-02 00:27:46.891430 | orchestrator | Tuesday 02 September 2025 00:27:31 +0000 (0:00:01.126) 0:03:47.972 ***** 2025-09-02 00:27:46.891441 | orchestrator | changed: [testbed-manager] 2025-09-02 00:27:46.891452 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:27:46.891463 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:27:46.891499 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:27:46.891530 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:27:46.891542 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:27:46.891553 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:27:46.891563 | orchestrator | 2025-09-02 00:27:46.891574 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] 
******************************** 2025-09-02 00:27:46.891588 | orchestrator | Tuesday 02 September 2025 00:27:33 +0000 (0:00:01.142) 0:03:49.114 ***** 2025-09-02 00:27:46.891600 | orchestrator | changed: [testbed-manager] 2025-09-02 00:27:46.891613 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:27:46.891625 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:27:46.891637 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:27:46.891649 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:27:46.891662 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:27:46.891674 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:27:46.891687 | orchestrator | 2025-09-02 00:27:46.891700 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-09-02 00:27:46.891712 | orchestrator | Tuesday 02 September 2025 00:27:34 +0000 (0:00:01.166) 0:03:50.280 ***** 2025-09-02 00:27:46.891735 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:27:46.891748 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:27:46.891760 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:27:46.891772 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:27:46.891784 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:27:46.891797 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:27:46.891809 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:27:46.891822 | orchestrator | 2025-09-02 00:27:46.891835 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-09-02 00:27:46.891848 | orchestrator | Tuesday 02 September 2025 00:27:34 +0000 (0:00:00.304) 0:03:50.585 ***** 2025-09-02 00:27:46.891861 | orchestrator | ok: [testbed-manager] 2025-09-02 00:27:46.891874 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:27:46.891887 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:27:46.891899 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:27:46.891912 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:27:46.891925 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:27:46.891937 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:27:46.891949 | orchestrator | 2025-09-02 00:27:46.891960 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-09-02 00:27:46.891971 | orchestrator | Tuesday 02 September 2025 00:27:35 +0000 (0:00:00.729) 0:03:51.314 ***** 2025-09-02 00:27:46.891984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:27:46.891997 | orchestrator | 2025-09-02 00:27:46.892008 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-09-02 00:27:46.892019 | orchestrator | Tuesday 02 September 2025 00:27:35 +0000 (0:00:00.397) 0:03:51.711 ***** 2025-09-02 00:27:46.892030 | orchestrator | ok: [testbed-manager] 2025-09-02 00:27:46.892041 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:27:46.892052 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:27:46.892062 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:27:46.892073 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:27:46.892083 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:27:46.892094 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:27:46.892105 | orchestrator | 
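
The three rng tasks around this point (install the rng package, remove haveged, manage the rng service) follow the usual install/replace/enable pattern. A minimal stand-alone sketch of that pattern is below; the concrete package and service names (rng-tools5, rngd) are assumptions for illustration and may not match what the osism.services.rng role actually installs:

    - name: Provide an entropy daemon (illustrative sketch, not the osism.services.rng role)
      hosts: all
      become: true
      tasks:
        - name: Install the rng package (package name assumed)
          ansible.builtin.apt:
            name: rng-tools5
            state: present
            update_cache: true

        - name: Remove the haveged package it replaces
          ansible.builtin.apt:
            name: haveged
            state: absent

        - name: Enable and start the rng service (service name assumed)
          ansible.builtin.service:
            name: rngd
            state: started
            enabled: true
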
2025-09-02 00:27:46.892115 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-09-02 00:27:46.892126 | orchestrator | Tuesday 02 September 2025 00:27:43 +0000 (0:00:07.871) 0:03:59.583 ***** 2025-09-02 00:27:46.892142 | orchestrator | ok: [testbed-manager] 2025-09-02 00:27:46.892153 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:27:46.892164 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:27:46.892174 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:27:46.892185 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:27:46.892196 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:27:46.892207 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:27:46.892218 | orchestrator | 2025-09-02 00:27:46.892229 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-09-02 00:27:46.892240 | orchestrator | Tuesday 02 September 2025 00:27:44 +0000 (0:00:01.269) 0:04:00.852 ***** 2025-09-02 00:27:46.892250 | orchestrator | ok: [testbed-manager] 2025-09-02 00:27:46.892261 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:27:46.892272 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:27:46.892282 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:27:46.892293 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:27:46.892304 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:27:46.892314 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:27:46.892325 | orchestrator | 2025-09-02 00:27:46.892336 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-09-02 00:27:46.892347 | orchestrator | Tuesday 02 September 2025 00:27:45 +0000 (0:00:01.030) 0:04:01.883 ***** 2025-09-02 00:27:46.892357 | orchestrator | ok: [testbed-manager] 2025-09-02 00:27:46.892376 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:27:46.892387 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:27:46.892397 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:27:46.892408 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:27:46.892418 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:27:46.892429 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:27:46.892440 | orchestrator | 2025-09-02 00:27:46.892451 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-09-02 00:27:46.892462 | orchestrator | Tuesday 02 September 2025 00:27:46 +0000 (0:00:00.288) 0:04:02.171 ***** 2025-09-02 00:27:46.892500 | orchestrator | ok: [testbed-manager] 2025-09-02 00:27:46.892513 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:27:46.892523 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:27:46.892533 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:27:46.892544 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:27:46.892554 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:27:46.892565 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:27:46.892575 | orchestrator | 2025-09-02 00:27:46.892586 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-09-02 00:27:46.892597 | orchestrator | Tuesday 02 September 2025 00:27:46 +0000 (0:00:00.423) 0:04:02.594 ***** 2025-09-02 00:27:46.892607 | orchestrator | ok: [testbed-manager] 2025-09-02 00:27:46.892618 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:27:46.892629 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:27:46.892639 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:27:46.892650 | orchestrator | ok: 
[testbed-node-3] 2025-09-02 00:27:46.892667 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:28:59.624714 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:28:59.624840 | orchestrator | 2025-09-02 00:28:59.624858 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-09-02 00:28:59.624873 | orchestrator | Tuesday 02 September 2025 00:27:46 +0000 (0:00:00.301) 0:04:02.896 ***** 2025-09-02 00:28:59.624884 | orchestrator | ok: [testbed-manager] 2025-09-02 00:28:59.624895 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:28:59.624906 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:28:59.624917 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:28:59.624928 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:28:59.624938 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:28:59.624949 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:28:59.624960 | orchestrator | 2025-09-02 00:28:59.624971 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-09-02 00:28:59.624982 | orchestrator | Tuesday 02 September 2025 00:27:52 +0000 (0:00:05.694) 0:04:08.590 ***** 2025-09-02 00:28:59.624995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:28:59.625009 | orchestrator | 2025-09-02 00:28:59.625020 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-09-02 00:28:59.625031 | orchestrator | Tuesday 02 September 2025 00:27:52 +0000 (0:00:00.385) 0:04:08.975 ***** 2025-09-02 00:28:59.625043 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-09-02 00:28:59.625053 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-09-02 00:28:59.625064 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-09-02 00:28:59.625075 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-09-02 00:28:59.625086 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:28:59.625097 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-09-02 00:28:59.625107 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:28:59.625118 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-09-02 00:28:59.625128 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-09-02 00:28:59.625139 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-09-02 00:28:59.625173 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:28:59.625184 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-09-02 00:28:59.625195 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-09-02 00:28:59.625206 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:28:59.625216 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-09-02 00:28:59.625227 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-09-02 00:28:59.625238 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:28:59.625248 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:28:59.625259 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-09-02 00:28:59.625269 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-09-02 00:28:59.625280 | 
orchestrator | skipping: [testbed-node-5] 2025-09-02 00:28:59.625290 | orchestrator | 2025-09-02 00:28:59.625301 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-09-02 00:28:59.625311 | orchestrator | Tuesday 02 September 2025 00:27:53 +0000 (0:00:00.361) 0:04:09.337 ***** 2025-09-02 00:28:59.625338 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:28:59.625349 | orchestrator | 2025-09-02 00:28:59.625361 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-09-02 00:28:59.625372 | orchestrator | Tuesday 02 September 2025 00:27:53 +0000 (0:00:00.411) 0:04:09.748 ***** 2025-09-02 00:28:59.625382 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-09-02 00:28:59.625393 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-09-02 00:28:59.625403 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:28:59.625414 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-09-02 00:28:59.625424 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:28:59.625435 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-09-02 00:28:59.625445 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:28:59.625456 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-09-02 00:28:59.625466 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:28:59.625477 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-09-02 00:28:59.625487 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:28:59.625498 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:28:59.625508 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-09-02 00:28:59.625519 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:28:59.625529 | orchestrator | 2025-09-02 00:28:59.625540 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-09-02 00:28:59.625572 | orchestrator | Tuesday 02 September 2025 00:27:54 +0000 (0:00:00.332) 0:04:10.081 ***** 2025-09-02 00:28:59.625583 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:28:59.625594 | orchestrator | 2025-09-02 00:28:59.625605 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-09-02 00:28:59.625615 | orchestrator | Tuesday 02 September 2025 00:27:54 +0000 (0:00:00.456) 0:04:10.537 ***** 2025-09-02 00:28:59.625626 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:28:59.625655 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:28:59.625667 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:28:59.625677 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:28:59.625688 | orchestrator | changed: [testbed-manager] 2025-09-02 00:28:59.625699 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:28:59.625710 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:28:59.625729 | orchestrator | 2025-09-02 00:28:59.625740 | orchestrator | TASK 
[osism.commons.cleanup : Remove cloudinit package] ************************ 2025-09-02 00:28:59.625751 | orchestrator | Tuesday 02 September 2025 00:28:30 +0000 (0:00:35.513) 0:04:46.051 ***** 2025-09-02 00:28:59.625762 | orchestrator | changed: [testbed-manager] 2025-09-02 00:28:59.625773 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:28:59.625784 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:28:59.625794 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:28:59.625805 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:28:59.625816 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:28:59.625827 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:28:59.625837 | orchestrator | 2025-09-02 00:28:59.625848 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-09-02 00:28:59.625859 | orchestrator | Tuesday 02 September 2025 00:28:38 +0000 (0:00:08.603) 0:04:54.655 ***** 2025-09-02 00:28:59.625870 | orchestrator | changed: [testbed-manager] 2025-09-02 00:28:59.625880 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:28:59.625891 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:28:59.625902 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:28:59.625912 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:28:59.625923 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:28:59.625934 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:28:59.625945 | orchestrator | 2025-09-02 00:28:59.625955 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-09-02 00:28:59.625966 | orchestrator | Tuesday 02 September 2025 00:28:46 +0000 (0:00:08.229) 0:05:02.884 ***** 2025-09-02 00:28:59.625977 | orchestrator | ok: [testbed-manager] 2025-09-02 00:28:59.625988 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:28:59.625999 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:28:59.626010 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:28:59.626087 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:28:59.626107 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:28:59.626176 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:28:59.626196 | orchestrator | 2025-09-02 00:28:59.626215 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-09-02 00:28:59.626227 | orchestrator | Tuesday 02 September 2025 00:28:48 +0000 (0:00:01.776) 0:05:04.660 ***** 2025-09-02 00:28:59.626237 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:28:59.626248 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:28:59.626258 | orchestrator | changed: [testbed-manager] 2025-09-02 00:28:59.626269 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:28:59.626279 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:28:59.626289 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:28:59.626300 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:28:59.626310 | orchestrator | 2025-09-02 00:28:59.626321 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-09-02 00:28:59.626331 | orchestrator | Tuesday 02 September 2025 00:28:54 +0000 (0:00:06.236) 0:05:10.897 ***** 2025-09-02 00:28:59.626343 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 
00:28:59.626355 | orchestrator | 2025-09-02 00:28:59.626373 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-09-02 00:28:59.626385 | orchestrator | Tuesday 02 September 2025 00:28:55 +0000 (0:00:00.545) 0:05:11.442 ***** 2025-09-02 00:28:59.626395 | orchestrator | changed: [testbed-manager] 2025-09-02 00:28:59.626406 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:28:59.626416 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:28:59.626426 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:28:59.626436 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:28:59.626447 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:28:59.626457 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:28:59.626468 | orchestrator | 2025-09-02 00:28:59.626478 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-09-02 00:28:59.626500 | orchestrator | Tuesday 02 September 2025 00:28:56 +0000 (0:00:00.737) 0:05:12.180 ***** 2025-09-02 00:28:59.626511 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:28:59.626521 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:28:59.626532 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:28:59.626564 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:28:59.626576 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:28:59.626586 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:28:59.626597 | orchestrator | ok: [testbed-manager] 2025-09-02 00:28:59.626607 | orchestrator | 2025-09-02 00:28:59.626618 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-09-02 00:28:59.626629 | orchestrator | Tuesday 02 September 2025 00:28:58 +0000 (0:00:02.386) 0:05:14.567 ***** 2025-09-02 00:28:59.626639 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:28:59.626650 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:28:59.626660 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:28:59.626671 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:28:59.626681 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:28:59.626692 | orchestrator | changed: [testbed-manager] 2025-09-02 00:28:59.626702 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:28:59.626713 | orchestrator | 2025-09-02 00:28:59.626723 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-09-02 00:28:59.626734 | orchestrator | Tuesday 02 September 2025 00:28:59 +0000 (0:00:00.782) 0:05:15.350 ***** 2025-09-02 00:28:59.626744 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:28:59.626755 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:28:59.626765 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:28:59.626776 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:28:59.626786 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:28:59.626797 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:28:59.626807 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:28:59.626818 | orchestrator | 2025-09-02 00:28:59.626829 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-09-02 00:28:59.626850 | orchestrator | Tuesday 02 September 2025 00:28:59 +0000 (0:00:00.275) 0:05:15.626 ***** 2025-09-02 00:29:27.403171 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:29:27.403297 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:29:27.403313 | orchestrator | skipping: 
[testbed-node-1] 2025-09-02 00:29:27.403325 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:29:27.403336 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:29:27.403347 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:29:27.403358 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:29:27.403369 | orchestrator | 2025-09-02 00:29:27.403382 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-09-02 00:29:27.403395 | orchestrator | Tuesday 02 September 2025 00:29:00 +0000 (0:00:00.397) 0:05:16.023 ***** 2025-09-02 00:29:27.403406 | orchestrator | ok: [testbed-manager] 2025-09-02 00:29:27.403418 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:29:27.403429 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:29:27.403440 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:29:27.403451 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:29:27.403461 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:29:27.403472 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:29:27.403483 | orchestrator | 2025-09-02 00:29:27.403494 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-09-02 00:29:27.403505 | orchestrator | Tuesday 02 September 2025 00:29:00 +0000 (0:00:00.299) 0:05:16.323 ***** 2025-09-02 00:29:27.403516 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:29:27.403527 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:29:27.403594 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:29:27.403606 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:29:27.403616 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:29:27.403627 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:29:27.403661 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:29:27.403673 | orchestrator | 2025-09-02 00:29:27.403684 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-09-02 00:29:27.403696 | orchestrator | Tuesday 02 September 2025 00:29:00 +0000 (0:00:00.292) 0:05:16.616 ***** 2025-09-02 00:29:27.403709 | orchestrator | ok: [testbed-manager] 2025-09-02 00:29:27.403722 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:29:27.403735 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:29:27.403747 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:29:27.403760 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:29:27.403771 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:29:27.403784 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:29:27.403797 | orchestrator | 2025-09-02 00:29:27.403810 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-09-02 00:29:27.403823 | orchestrator | Tuesday 02 September 2025 00:29:00 +0000 (0:00:00.298) 0:05:16.915 ***** 2025-09-02 00:29:27.403836 | orchestrator | ok: [testbed-manager] =>  2025-09-02 00:29:27.403848 | orchestrator |  docker_version: 5:27.5.1 2025-09-02 00:29:27.403861 | orchestrator | ok: [testbed-node-0] =>  2025-09-02 00:29:27.403873 | orchestrator |  docker_version: 5:27.5.1 2025-09-02 00:29:27.403885 | orchestrator | ok: [testbed-node-1] =>  2025-09-02 00:29:27.403897 | orchestrator |  docker_version: 5:27.5.1 2025-09-02 00:29:27.403910 | orchestrator | ok: [testbed-node-2] =>  2025-09-02 00:29:27.403921 | orchestrator |  docker_version: 5:27.5.1 2025-09-02 00:29:27.403933 | orchestrator | ok: [testbed-node-3] =>  2025-09-02 00:29:27.403945 | orchestrator |  docker_version: 5:27.5.1 
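The docker_version value printed here (5:27.5.1) is what the later "Pin docker package version" task holds the docker-ce package to. A minimal way to express such a pin is an apt preferences entry, shown below as an illustrative sketch under assumed names and paths, not as the actual osism.services.docker implementation.

- name: Pin docker package version     # sketch only; the real role's pinning mechanism may differ
  become: true
  ansible.builtin.copy:
    dest: /etc/apt/preferences.d/docker-ce   # assumed location for the pin file
    mode: "0644"
    content: |
      Package: docker-ce
      Pin: version 5:27.5.1*
      Pin-Priority: 1000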
2025-09-02 00:29:27.403957 | orchestrator | ok: [testbed-node-4] =>  2025-09-02 00:29:27.403971 | orchestrator |  docker_version: 5:27.5.1 2025-09-02 00:29:27.403983 | orchestrator | ok: [testbed-node-5] =>  2025-09-02 00:29:27.403997 | orchestrator |  docker_version: 5:27.5.1 2025-09-02 00:29:27.404009 | orchestrator | 2025-09-02 00:29:27.404022 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-09-02 00:29:27.404034 | orchestrator | Tuesday 02 September 2025 00:29:01 +0000 (0:00:00.305) 0:05:17.221 ***** 2025-09-02 00:29:27.404048 | orchestrator | ok: [testbed-manager] =>  2025-09-02 00:29:27.404059 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-02 00:29:27.404069 | orchestrator | ok: [testbed-node-0] =>  2025-09-02 00:29:27.404080 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-02 00:29:27.404090 | orchestrator | ok: [testbed-node-1] =>  2025-09-02 00:29:27.404101 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-02 00:29:27.404111 | orchestrator | ok: [testbed-node-2] =>  2025-09-02 00:29:27.404122 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-02 00:29:27.404132 | orchestrator | ok: [testbed-node-3] =>  2025-09-02 00:29:27.404143 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-02 00:29:27.404154 | orchestrator | ok: [testbed-node-4] =>  2025-09-02 00:29:27.404164 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-02 00:29:27.404175 | orchestrator | ok: [testbed-node-5] =>  2025-09-02 00:29:27.404186 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-02 00:29:27.404196 | orchestrator | 2025-09-02 00:29:27.404207 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-09-02 00:29:27.404218 | orchestrator | Tuesday 02 September 2025 00:29:01 +0000 (0:00:00.313) 0:05:17.534 ***** 2025-09-02 00:29:27.404228 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:29:27.404239 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:29:27.404249 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:29:27.404260 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:29:27.404270 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:29:27.404281 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:29:27.404291 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:29:27.404302 | orchestrator | 2025-09-02 00:29:27.404312 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-09-02 00:29:27.404323 | orchestrator | Tuesday 02 September 2025 00:29:01 +0000 (0:00:00.280) 0:05:17.815 ***** 2025-09-02 00:29:27.404342 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:29:27.404353 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:29:27.404363 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:29:27.404374 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:29:27.404384 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:29:27.404395 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:29:27.404406 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:29:27.404416 | orchestrator | 2025-09-02 00:29:27.404446 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-09-02 00:29:27.404457 | orchestrator | Tuesday 02 September 2025 00:29:02 +0000 (0:00:00.285) 0:05:18.101 ***** 2025-09-02 00:29:27.404486 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:29:27.404499 | orchestrator | 2025-09-02 00:29:27.404510 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-09-02 00:29:27.404521 | orchestrator | Tuesday 02 September 2025 00:29:02 +0000 (0:00:00.452) 0:05:18.553 ***** 2025-09-02 00:29:27.404532 | orchestrator | ok: [testbed-manager] 2025-09-02 00:29:27.404561 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:29:27.404572 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:29:27.404582 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:29:27.404593 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:29:27.404604 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:29:27.404615 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:29:27.404625 | orchestrator | 2025-09-02 00:29:27.404636 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-09-02 00:29:27.404647 | orchestrator | Tuesday 02 September 2025 00:29:03 +0000 (0:00:00.803) 0:05:19.357 ***** 2025-09-02 00:29:27.404657 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:29:27.404668 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:29:27.404679 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:29:27.404689 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:29:27.404700 | orchestrator | ok: [testbed-manager] 2025-09-02 00:29:27.404710 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:29:27.404721 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:29:27.404731 | orchestrator | 2025-09-02 00:29:27.404742 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-09-02 00:29:27.404754 | orchestrator | Tuesday 02 September 2025 00:29:06 +0000 (0:00:03.326) 0:05:22.684 ***** 2025-09-02 00:29:27.404765 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-09-02 00:29:27.404777 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-09-02 00:29:27.404787 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-09-02 00:29:27.404798 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-09-02 00:29:27.404809 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-09-02 00:29:27.404820 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-09-02 00:29:27.404831 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:29:27.404841 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-09-02 00:29:27.404852 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-09-02 00:29:27.404862 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-09-02 00:29:27.404873 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:29:27.404884 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-09-02 00:29:27.404894 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-09-02 00:29:27.404905 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-09-02 00:29:27.404916 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:29:27.404926 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-09-02 00:29:27.404937 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  
2025-09-02 00:29:27.404955 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-09-02 00:29:27.404966 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:29:27.404977 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-09-02 00:29:27.404987 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-09-02 00:29:27.404998 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-09-02 00:29:27.405008 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:29:27.405025 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:29:27.405036 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-09-02 00:29:27.405047 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-09-02 00:29:27.405058 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-09-02 00:29:27.405068 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:29:27.405079 | orchestrator | 2025-09-02 00:29:27.405090 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-09-02 00:29:27.405101 | orchestrator | Tuesday 02 September 2025 00:29:07 +0000 (0:00:00.582) 0:05:23.266 ***** 2025-09-02 00:29:27.405111 | orchestrator | ok: [testbed-manager] 2025-09-02 00:29:27.405122 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:29:27.405132 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:29:27.405143 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:29:27.405154 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:29:27.405164 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:29:27.405175 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:29:27.405185 | orchestrator | 2025-09-02 00:29:27.405196 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-09-02 00:29:27.405207 | orchestrator | Tuesday 02 September 2025 00:29:14 +0000 (0:00:07.177) 0:05:30.443 ***** 2025-09-02 00:29:27.405217 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:29:27.405228 | orchestrator | ok: [testbed-manager] 2025-09-02 00:29:27.405238 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:29:27.405249 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:29:27.405259 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:29:27.405270 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:29:27.405280 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:29:27.405291 | orchestrator | 2025-09-02 00:29:27.405302 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-09-02 00:29:27.405313 | orchestrator | Tuesday 02 September 2025 00:29:15 +0000 (0:00:01.293) 0:05:31.737 ***** 2025-09-02 00:29:27.405323 | orchestrator | ok: [testbed-manager] 2025-09-02 00:29:27.405334 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:29:27.405344 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:29:27.405355 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:29:27.405365 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:29:27.405376 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:29:27.405386 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:29:27.405397 | orchestrator | 2025-09-02 00:29:27.405408 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-09-02 00:29:27.405418 | orchestrator | Tuesday 02 September 2025 00:29:24 +0000 (0:00:08.333) 0:05:40.071 ***** 2025-09-02 
00:29:27.405429 | orchestrator | changed: [testbed-manager] 2025-09-02 00:29:27.405440 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:29:27.405450 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:29:27.405468 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:30:13.097147 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:30:13.097244 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:30:13.097259 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:30:13.097270 | orchestrator | 2025-09-02 00:30:13.097284 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-09-02 00:30:13.097296 | orchestrator | Tuesday 02 September 2025 00:29:27 +0000 (0:00:03.327) 0:05:43.399 ***** 2025-09-02 00:30:13.097307 | orchestrator | ok: [testbed-manager] 2025-09-02 00:30:13.097319 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:30:13.097349 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:30:13.097360 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:30:13.097371 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:30:13.097382 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:30:13.097392 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:30:13.097403 | orchestrator | 2025-09-02 00:30:13.097414 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-09-02 00:30:13.097425 | orchestrator | Tuesday 02 September 2025 00:29:28 +0000 (0:00:01.402) 0:05:44.802 ***** 2025-09-02 00:30:13.097435 | orchestrator | ok: [testbed-manager] 2025-09-02 00:30:13.097446 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:30:13.097456 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:30:13.097467 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:30:13.097478 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:30:13.097550 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:30:13.097575 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:30:13.097589 | orchestrator | 2025-09-02 00:30:13.097599 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-09-02 00:30:13.097611 | orchestrator | Tuesday 02 September 2025 00:29:30 +0000 (0:00:01.381) 0:05:46.183 ***** 2025-09-02 00:30:13.097621 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:30:13.097632 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:30:13.097643 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:30:13.097653 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:30:13.097664 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:30:13.097674 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:30:13.097685 | orchestrator | changed: [testbed-manager] 2025-09-02 00:30:13.097696 | orchestrator | 2025-09-02 00:30:13.097709 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-09-02 00:30:13.097722 | orchestrator | Tuesday 02 September 2025 00:29:31 +0000 (0:00:00.871) 0:05:47.054 ***** 2025-09-02 00:30:13.097736 | orchestrator | ok: [testbed-manager] 2025-09-02 00:30:13.097749 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:30:13.097762 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:30:13.097775 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:30:13.097788 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:30:13.097801 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:30:13.097814 | orchestrator | 
changed: [testbed-node-1] 2025-09-02 00:30:13.097827 | orchestrator | 2025-09-02 00:30:13.097840 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-09-02 00:30:13.097853 | orchestrator | Tuesday 02 September 2025 00:29:41 +0000 (0:00:10.007) 0:05:57.062 ***** 2025-09-02 00:30:13.097865 | orchestrator | changed: [testbed-manager] 2025-09-02 00:30:13.097877 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:30:13.097890 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:30:13.097902 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:30:13.097915 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:30:13.097927 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:30:13.097940 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:30:13.097953 | orchestrator | 2025-09-02 00:30:13.097965 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-09-02 00:30:13.097992 | orchestrator | Tuesday 02 September 2025 00:29:42 +0000 (0:00:01.016) 0:05:58.079 ***** 2025-09-02 00:30:13.098005 | orchestrator | ok: [testbed-manager] 2025-09-02 00:30:13.098065 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:30:13.098079 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:30:13.098092 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:30:13.098102 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:30:13.098113 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:30:13.098124 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:30:13.098134 | orchestrator | 2025-09-02 00:30:13.098145 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-09-02 00:30:13.098156 | orchestrator | Tuesday 02 September 2025 00:29:51 +0000 (0:00:09.607) 0:06:07.686 ***** 2025-09-02 00:30:13.098176 | orchestrator | ok: [testbed-manager] 2025-09-02 00:30:13.098187 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:30:13.098198 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:30:13.098209 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:30:13.098220 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:30:13.098230 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:30:13.098241 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:30:13.098252 | orchestrator | 2025-09-02 00:30:13.098262 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-09-02 00:30:13.098273 | orchestrator | Tuesday 02 September 2025 00:30:02 +0000 (0:00:11.147) 0:06:18.834 ***** 2025-09-02 00:30:13.098284 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-09-02 00:30:13.098296 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-09-02 00:30:13.098307 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-09-02 00:30:13.098317 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-09-02 00:30:13.098328 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-09-02 00:30:13.098339 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-09-02 00:30:13.098350 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-09-02 00:30:13.098361 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-09-02 00:30:13.098371 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-09-02 00:30:13.098382 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-09-02 
00:30:13.098393 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-09-02 00:30:13.098404 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-09-02 00:30:13.098415 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-09-02 00:30:13.098426 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-09-02 00:30:13.098437 | orchestrator | 2025-09-02 00:30:13.098448 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-09-02 00:30:13.098475 | orchestrator | Tuesday 02 September 2025 00:30:04 +0000 (0:00:01.207) 0:06:20.041 ***** 2025-09-02 00:30:13.098487 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:30:13.098521 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:30:13.098532 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:30:13.098542 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:30:13.098553 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:30:13.098564 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:30:13.098574 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:30:13.098585 | orchestrator | 2025-09-02 00:30:13.098595 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-09-02 00:30:13.098606 | orchestrator | Tuesday 02 September 2025 00:30:04 +0000 (0:00:00.645) 0:06:20.687 ***** 2025-09-02 00:30:13.098617 | orchestrator | ok: [testbed-manager] 2025-09-02 00:30:13.098628 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:30:13.098638 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:30:13.098649 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:30:13.098659 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:30:13.098670 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:30:13.098680 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:30:13.098691 | orchestrator | 2025-09-02 00:30:13.098702 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-09-02 00:30:13.098713 | orchestrator | Tuesday 02 September 2025 00:30:08 +0000 (0:00:03.697) 0:06:24.385 ***** 2025-09-02 00:30:13.098724 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:30:13.098735 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:30:13.098745 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:30:13.098756 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:30:13.098766 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:30:13.098777 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:30:13.098787 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:30:13.098805 | orchestrator | 2025-09-02 00:30:13.098816 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-09-02 00:30:13.098828 | orchestrator | Tuesday 02 September 2025 00:30:08 +0000 (0:00:00.535) 0:06:24.920 ***** 2025-09-02 00:30:13.098839 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-09-02 00:30:13.098850 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-09-02 00:30:13.098861 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:30:13.098871 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-09-02 00:30:13.098882 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-09-02 00:30:13.098892 | orchestrator | skipping: 
[testbed-node-0] 2025-09-02 00:30:13.098903 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-09-02 00:30:13.098913 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-09-02 00:30:13.098924 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:30:13.098935 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-09-02 00:30:13.098945 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-09-02 00:30:13.098956 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:30:13.098966 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-09-02 00:30:13.098977 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-09-02 00:30:13.098987 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:30:13.098998 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-09-02 00:30:13.099014 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-09-02 00:30:13.099025 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:30:13.099036 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-09-02 00:30:13.099046 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-09-02 00:30:13.099057 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:30:13.099067 | orchestrator | 2025-09-02 00:30:13.099078 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-09-02 00:30:13.099089 | orchestrator | Tuesday 02 September 2025 00:30:09 +0000 (0:00:00.850) 0:06:25.771 ***** 2025-09-02 00:30:13.099100 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:30:13.099110 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:30:13.099121 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:30:13.099131 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:30:13.099142 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:30:13.099152 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:30:13.099162 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:30:13.099173 | orchestrator | 2025-09-02 00:30:13.099183 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-09-02 00:30:13.099194 | orchestrator | Tuesday 02 September 2025 00:30:10 +0000 (0:00:00.576) 0:06:26.348 ***** 2025-09-02 00:30:13.099205 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:30:13.099215 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:30:13.099226 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:30:13.099236 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:30:13.099247 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:30:13.099257 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:30:13.099267 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:30:13.099278 | orchestrator | 2025-09-02 00:30:13.099289 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-09-02 00:30:13.099299 | orchestrator | Tuesday 02 September 2025 00:30:10 +0000 (0:00:00.579) 0:06:26.928 ***** 2025-09-02 00:30:13.099310 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:30:13.099321 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:30:13.099331 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:30:13.099342 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:30:13.099358 | orchestrator | 
skipping: [testbed-node-3] 2025-09-02 00:30:13.099369 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:30:13.099380 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:30:13.099390 | orchestrator | 2025-09-02 00:30:13.099401 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-09-02 00:30:13.099412 | orchestrator | Tuesday 02 September 2025 00:30:11 +0000 (0:00:00.560) 0:06:27.488 ***** 2025-09-02 00:30:13.099422 | orchestrator | ok: [testbed-manager] 2025-09-02 00:30:13.099440 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:30:35.924038 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:30:35.924167 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:30:35.924183 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:30:35.924194 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:30:35.924205 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:30:35.924217 | orchestrator | 2025-09-02 00:30:35.924229 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-09-02 00:30:35.924242 | orchestrator | Tuesday 02 September 2025 00:30:13 +0000 (0:00:01.612) 0:06:29.100 ***** 2025-09-02 00:30:35.924254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:30:35.924268 | orchestrator | 2025-09-02 00:30:35.924279 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-09-02 00:30:35.924291 | orchestrator | Tuesday 02 September 2025 00:30:14 +0000 (0:00:01.173) 0:06:30.274 ***** 2025-09-02 00:30:35.924302 | orchestrator | ok: [testbed-manager] 2025-09-02 00:30:35.924312 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:30:35.924325 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:30:35.924335 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:30:35.924346 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:30:35.924357 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:30:35.924368 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:30:35.924378 | orchestrator | 2025-09-02 00:30:35.924389 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-09-02 00:30:35.924400 | orchestrator | Tuesday 02 September 2025 00:30:15 +0000 (0:00:00.868) 0:06:31.143 ***** 2025-09-02 00:30:35.924411 | orchestrator | ok: [testbed-manager] 2025-09-02 00:30:35.924421 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:30:35.924432 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:30:35.924444 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:30:35.924455 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:30:35.924522 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:30:35.924536 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:30:35.924547 | orchestrator | 2025-09-02 00:30:35.924557 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-09-02 00:30:35.924571 | orchestrator | Tuesday 02 September 2025 00:30:16 +0000 (0:00:00.900) 0:06:32.044 ***** 2025-09-02 00:30:35.924584 | orchestrator | ok: [testbed-manager] 2025-09-02 00:30:35.924597 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:30:35.924610 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:30:35.924623 | orchestrator | changed: 
[testbed-node-2] 2025-09-02 00:30:35.924635 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:30:35.924648 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:30:35.924660 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:30:35.924673 | orchestrator | 2025-09-02 00:30:35.924686 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-09-02 00:30:35.924699 | orchestrator | Tuesday 02 September 2025 00:30:17 +0000 (0:00:01.384) 0:06:33.428 ***** 2025-09-02 00:30:35.924712 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:30:35.924724 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:30:35.924737 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:30:35.924750 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:30:35.924763 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:30:35.924775 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:30:35.924810 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:30:35.924823 | orchestrator | 2025-09-02 00:30:35.924835 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-09-02 00:30:35.924848 | orchestrator | Tuesday 02 September 2025 00:30:18 +0000 (0:00:01.580) 0:06:35.009 ***** 2025-09-02 00:30:35.924860 | orchestrator | ok: [testbed-manager] 2025-09-02 00:30:35.924872 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:30:35.924885 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:30:35.924897 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:30:35.924910 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:30:35.924922 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:30:35.924933 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:30:35.924943 | orchestrator | 2025-09-02 00:30:35.924954 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-09-02 00:30:35.924964 | orchestrator | Tuesday 02 September 2025 00:30:20 +0000 (0:00:01.344) 0:06:36.353 ***** 2025-09-02 00:30:35.924975 | orchestrator | changed: [testbed-manager] 2025-09-02 00:30:35.924985 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:30:35.924996 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:30:35.925006 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:30:35.925016 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:30:35.925027 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:30:35.925037 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:30:35.925048 | orchestrator | 2025-09-02 00:30:35.925058 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-09-02 00:30:35.925069 | orchestrator | Tuesday 02 September 2025 00:30:21 +0000 (0:00:01.491) 0:06:37.845 ***** 2025-09-02 00:30:35.925080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:30:35.925091 | orchestrator | 2025-09-02 00:30:35.925101 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-09-02 00:30:35.925112 | orchestrator | Tuesday 02 September 2025 00:30:22 +0000 (0:00:01.153) 0:06:38.998 ***** 2025-09-02 00:30:35.925122 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:30:35.925134 | orchestrator | ok: [testbed-manager] 2025-09-02 00:30:35.925144 | orchestrator | ok: 
[testbed-node-1] 2025-09-02 00:30:35.925155 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:30:35.925165 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:30:35.925176 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:30:35.925186 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:30:35.925197 | orchestrator | 2025-09-02 00:30:35.925207 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-09-02 00:30:35.925218 | orchestrator | Tuesday 02 September 2025 00:30:24 +0000 (0:00:01.384) 0:06:40.382 ***** 2025-09-02 00:30:35.925228 | orchestrator | ok: [testbed-manager] 2025-09-02 00:30:35.925239 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:30:35.925266 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:30:35.925278 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:30:35.925288 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:30:35.925298 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:30:35.925309 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:30:35.925319 | orchestrator | 2025-09-02 00:30:35.925330 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-09-02 00:30:35.925341 | orchestrator | Tuesday 02 September 2025 00:30:25 +0000 (0:00:01.118) 0:06:41.501 ***** 2025-09-02 00:30:35.925351 | orchestrator | ok: [testbed-manager] 2025-09-02 00:30:35.925362 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:30:35.925372 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:30:35.925383 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:30:35.925393 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:30:35.925403 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:30:35.925414 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:30:35.925424 | orchestrator | 2025-09-02 00:30:35.925443 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-09-02 00:30:35.925454 | orchestrator | Tuesday 02 September 2025 00:30:26 +0000 (0:00:01.193) 0:06:42.695 ***** 2025-09-02 00:30:35.925481 | orchestrator | ok: [testbed-manager] 2025-09-02 00:30:35.925493 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:30:35.925503 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:30:35.925514 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:30:35.925524 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:30:35.925535 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:30:35.925545 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:30:35.925556 | orchestrator | 2025-09-02 00:30:35.925567 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-09-02 00:30:35.925577 | orchestrator | Tuesday 02 September 2025 00:30:27 +0000 (0:00:01.149) 0:06:43.844 ***** 2025-09-02 00:30:35.925589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:30:35.925599 | orchestrator | 2025-09-02 00:30:35.925610 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-02 00:30:35.925621 | orchestrator | Tuesday 02 September 2025 00:30:29 +0000 (0:00:01.259) 0:06:45.104 ***** 2025-09-02 00:30:35.925631 | orchestrator | 2025-09-02 00:30:35.925642 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-02 00:30:35.925653 | orchestrator 
| Tuesday 02 September 2025 00:30:29 +0000 (0:00:00.041) 0:06:45.145 ***** 2025-09-02 00:30:35.925664 | orchestrator | 2025-09-02 00:30:35.925675 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-02 00:30:35.925685 | orchestrator | Tuesday 02 September 2025 00:30:29 +0000 (0:00:00.039) 0:06:45.185 ***** 2025-09-02 00:30:35.925696 | orchestrator | 2025-09-02 00:30:35.925706 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-02 00:30:35.925717 | orchestrator | Tuesday 02 September 2025 00:30:29 +0000 (0:00:00.049) 0:06:45.234 ***** 2025-09-02 00:30:35.925728 | orchestrator | 2025-09-02 00:30:35.925756 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-02 00:30:35.925767 | orchestrator | Tuesday 02 September 2025 00:30:29 +0000 (0:00:00.039) 0:06:45.274 ***** 2025-09-02 00:30:35.925778 | orchestrator | 2025-09-02 00:30:35.925789 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-02 00:30:35.925804 | orchestrator | Tuesday 02 September 2025 00:30:29 +0000 (0:00:00.040) 0:06:45.314 ***** 2025-09-02 00:30:35.925815 | orchestrator | 2025-09-02 00:30:35.925826 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-02 00:30:35.925837 | orchestrator | Tuesday 02 September 2025 00:30:29 +0000 (0:00:00.049) 0:06:45.363 ***** 2025-09-02 00:30:35.925847 | orchestrator | 2025-09-02 00:30:35.925858 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-02 00:30:35.925868 | orchestrator | Tuesday 02 September 2025 00:30:29 +0000 (0:00:00.040) 0:06:45.404 ***** 2025-09-02 00:30:35.925879 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:30:35.925890 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:30:35.925901 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:30:35.925911 | orchestrator | 2025-09-02 00:30:35.925922 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-09-02 00:30:35.925933 | orchestrator | Tuesday 02 September 2025 00:30:30 +0000 (0:00:01.222) 0:06:46.627 ***** 2025-09-02 00:30:35.925943 | orchestrator | changed: [testbed-manager] 2025-09-02 00:30:35.925954 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:30:35.925965 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:30:35.925975 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:30:35.925986 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:30:35.925996 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:30:35.926007 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:30:35.926063 | orchestrator | 2025-09-02 00:30:35.926085 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-09-02 00:30:35.926096 | orchestrator | Tuesday 02 September 2025 00:30:32 +0000 (0:00:01.429) 0:06:48.057 ***** 2025-09-02 00:30:35.926107 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:30:35.926118 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:30:35.926128 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:30:35.926138 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:30:35.926149 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:30:35.926160 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:30:35.926170 | orchestrator | changed: [testbed-node-4] 2025-09-02 
00:30:35.926181 | orchestrator | 2025-09-02 00:30:35.926192 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-09-02 00:30:35.926203 | orchestrator | Tuesday 02 September 2025 00:30:34 +0000 (0:00:02.712) 0:06:50.769 ***** 2025-09-02 00:30:35.926213 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:30:35.926224 | orchestrator | 2025-09-02 00:30:35.926234 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-09-02 00:30:35.926245 | orchestrator | Tuesday 02 September 2025 00:30:34 +0000 (0:00:00.095) 0:06:50.865 ***** 2025-09-02 00:30:35.926256 | orchestrator | ok: [testbed-manager] 2025-09-02 00:30:35.926267 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:30:35.926277 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:30:35.926288 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:30:35.926306 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:31:02.339906 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:31:02.340037 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:31:02.340054 | orchestrator | 2025-09-02 00:31:02.340068 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-09-02 00:31:02.340081 | orchestrator | Tuesday 02 September 2025 00:30:35 +0000 (0:00:01.060) 0:06:51.925 ***** 2025-09-02 00:31:02.340139 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:31:02.340152 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:31:02.340163 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:31:02.340175 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:31:02.340186 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:31:02.340196 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:31:02.340207 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:31:02.340218 | orchestrator | 2025-09-02 00:31:02.340230 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-09-02 00:31:02.340241 | orchestrator | Tuesday 02 September 2025 00:30:36 +0000 (0:00:00.563) 0:06:52.489 ***** 2025-09-02 00:31:02.340253 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:31:02.340266 | orchestrator | 2025-09-02 00:31:02.340277 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-09-02 00:31:02.340288 | orchestrator | Tuesday 02 September 2025 00:30:37 +0000 (0:00:01.130) 0:06:53.619 ***** 2025-09-02 00:31:02.340299 | orchestrator | ok: [testbed-manager] 2025-09-02 00:31:02.340312 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:31:02.340322 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:31:02.340333 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:31:02.340344 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:31:02.340354 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:31:02.340365 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:31:02.340376 | orchestrator | 2025-09-02 00:31:02.340387 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-09-02 00:31:02.340397 | orchestrator | Tuesday 02 September 2025 00:30:38 +0000 (0:00:00.839) 0:06:54.459 ***** 2025-09-02 00:31:02.340408 | orchestrator | ok: [testbed-manager] 
=> (item=docker_containers) 2025-09-02 00:31:02.340419 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-09-02 00:31:02.340431 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-09-02 00:31:02.340496 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-09-02 00:31:02.340511 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-09-02 00:31:02.340524 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-09-02 00:31:02.340537 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-09-02 00:31:02.340550 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-09-02 00:31:02.340564 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-09-02 00:31:02.340576 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-09-02 00:31:02.340588 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-09-02 00:31:02.340601 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-09-02 00:31:02.340614 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-09-02 00:31:02.340641 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-09-02 00:31:02.340655 | orchestrator | 2025-09-02 00:31:02.340669 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-09-02 00:31:02.340681 | orchestrator | Tuesday 02 September 2025 00:30:40 +0000 (0:00:02.488) 0:06:56.947 ***** 2025-09-02 00:31:02.340694 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:31:02.340706 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:31:02.340720 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:31:02.340732 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:31:02.340744 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:31:02.340757 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:31:02.340770 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:31:02.340782 | orchestrator | 2025-09-02 00:31:02.340795 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-09-02 00:31:02.340805 | orchestrator | Tuesday 02 September 2025 00:30:41 +0000 (0:00:00.550) 0:06:57.498 ***** 2025-09-02 00:31:02.340817 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:31:02.340830 | orchestrator | 2025-09-02 00:31:02.340841 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-09-02 00:31:02.340855 | orchestrator | Tuesday 02 September 2025 00:30:42 +0000 (0:00:01.181) 0:06:58.679 ***** 2025-09-02 00:31:02.340866 | orchestrator | ok: [testbed-manager] 2025-09-02 00:31:02.340876 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:31:02.340887 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:31:02.340897 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:31:02.340908 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:31:02.340918 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:31:02.340929 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:31:02.340939 | orchestrator | 2025-09-02 00:31:02.340950 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 
2025-09-02 00:31:02.340961 | orchestrator | Tuesday 02 September 2025 00:30:43 +0000 (0:00:00.862) 0:06:59.542 ***** 2025-09-02 00:31:02.340972 | orchestrator | ok: [testbed-manager] 2025-09-02 00:31:02.340982 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:31:02.340992 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:31:02.341003 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:31:02.341013 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:31:02.341023 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:31:02.341034 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:31:02.341044 | orchestrator | 2025-09-02 00:31:02.341055 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-09-02 00:31:02.341084 | orchestrator | Tuesday 02 September 2025 00:30:44 +0000 (0:00:00.852) 0:07:00.395 ***** 2025-09-02 00:31:02.341096 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:31:02.341106 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:31:02.341117 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:31:02.341145 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:31:02.341157 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:31:02.341167 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:31:02.341178 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:31:02.341189 | orchestrator | 2025-09-02 00:31:02.341200 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-09-02 00:31:02.341210 | orchestrator | Tuesday 02 September 2025 00:30:44 +0000 (0:00:00.579) 0:07:00.974 ***** 2025-09-02 00:31:02.341221 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:31:02.341232 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:31:02.341243 | orchestrator | ok: [testbed-manager] 2025-09-02 00:31:02.341253 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:31:02.341264 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:31:02.341275 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:31:02.341286 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:31:02.341296 | orchestrator | 2025-09-02 00:31:02.341307 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-09-02 00:31:02.341318 | orchestrator | Tuesday 02 September 2025 00:30:46 +0000 (0:00:01.824) 0:07:02.799 ***** 2025-09-02 00:31:02.341329 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:31:02.341340 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:31:02.341350 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:31:02.341361 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:31:02.341372 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:31:02.341383 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:31:02.341393 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:31:02.341404 | orchestrator | 2025-09-02 00:31:02.341415 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-09-02 00:31:02.341426 | orchestrator | Tuesday 02 September 2025 00:30:47 +0000 (0:00:00.538) 0:07:03.337 ***** 2025-09-02 00:31:02.341454 | orchestrator | ok: [testbed-manager] 2025-09-02 00:31:02.341466 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:31:02.341476 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:31:02.341487 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:31:02.341498 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:31:02.341509 | 
orchestrator | changed: [testbed-node-3] 2025-09-02 00:31:02.341519 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:31:02.341530 | orchestrator | 2025-09-02 00:31:02.341541 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-09-02 00:31:02.341552 | orchestrator | Tuesday 02 September 2025 00:30:54 +0000 (0:00:07.337) 0:07:10.675 ***** 2025-09-02 00:31:02.341563 | orchestrator | ok: [testbed-manager] 2025-09-02 00:31:02.341574 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:31:02.341584 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:31:02.341595 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:31:02.341606 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:31:02.341617 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:31:02.341627 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:31:02.341638 | orchestrator | 2025-09-02 00:31:02.341649 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-09-02 00:31:02.341660 | orchestrator | Tuesday 02 September 2025 00:30:55 +0000 (0:00:01.334) 0:07:12.010 ***** 2025-09-02 00:31:02.341671 | orchestrator | ok: [testbed-manager] 2025-09-02 00:31:02.341682 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:31:02.341692 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:31:02.341703 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:31:02.341719 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:31:02.341730 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:31:02.341741 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:31:02.341752 | orchestrator | 2025-09-02 00:31:02.341763 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-09-02 00:31:02.341773 | orchestrator | Tuesday 02 September 2025 00:30:57 +0000 (0:00:01.771) 0:07:13.782 ***** 2025-09-02 00:31:02.341784 | orchestrator | ok: [testbed-manager] 2025-09-02 00:31:02.341803 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:31:02.341815 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:31:02.341825 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:31:02.341836 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:31:02.341847 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:31:02.341858 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:31:02.341868 | orchestrator | 2025-09-02 00:31:02.341879 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-02 00:31:02.341890 | orchestrator | Tuesday 02 September 2025 00:30:59 +0000 (0:00:01.973) 0:07:15.755 ***** 2025-09-02 00:31:02.341901 | orchestrator | ok: [testbed-manager] 2025-09-02 00:31:02.341912 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:31:02.341922 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:31:02.341933 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:31:02.341944 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:31:02.341955 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:31:02.341966 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:31:02.341977 | orchestrator | 2025-09-02 00:31:02.341988 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-02 00:31:02.341998 | orchestrator | Tuesday 02 September 2025 00:31:00 +0000 (0:00:00.879) 0:07:16.635 ***** 2025-09-02 00:31:02.342009 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:31:02.342068 
| orchestrator | skipping: [testbed-node-0] 2025-09-02 00:31:02.342079 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:31:02.342090 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:31:02.342101 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:31:02.342111 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:31:02.342122 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:31:02.342133 | orchestrator | 2025-09-02 00:31:02.342143 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-09-02 00:31:02.342154 | orchestrator | Tuesday 02 September 2025 00:31:01 +0000 (0:00:01.153) 0:07:17.788 ***** 2025-09-02 00:31:02.342165 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:31:02.342176 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:31:02.342186 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:31:02.342197 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:31:02.342208 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:31:02.342219 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:31:02.342229 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:31:02.342240 | orchestrator | 2025-09-02 00:31:02.342258 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-09-02 00:31:36.093597 | orchestrator | Tuesday 02 September 2025 00:31:02 +0000 (0:00:00.552) 0:07:18.341 ***** 2025-09-02 00:31:36.093722 | orchestrator | ok: [testbed-manager] 2025-09-02 00:31:36.093740 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:31:36.093751 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:31:36.093763 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:31:36.093774 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:31:36.093786 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:31:36.093797 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:31:36.093808 | orchestrator | 2025-09-02 00:31:36.093820 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-09-02 00:31:36.093832 | orchestrator | Tuesday 02 September 2025 00:31:02 +0000 (0:00:00.611) 0:07:18.952 ***** 2025-09-02 00:31:36.093843 | orchestrator | ok: [testbed-manager] 2025-09-02 00:31:36.093854 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:31:36.093865 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:31:36.093876 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:31:36.093886 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:31:36.093897 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:31:36.093908 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:31:36.093919 | orchestrator | 2025-09-02 00:31:36.093930 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-09-02 00:31:36.093941 | orchestrator | Tuesday 02 September 2025 00:31:03 +0000 (0:00:00.561) 0:07:19.514 ***** 2025-09-02 00:31:36.093975 | orchestrator | ok: [testbed-manager] 2025-09-02 00:31:36.093987 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:31:36.093997 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:31:36.094008 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:31:36.094079 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:31:36.094093 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:31:36.094104 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:31:36.094118 | orchestrator | 2025-09-02 00:31:36.094130 | orchestrator | TASK [osism.services.chrony : Populate service facts] 
************************** 2025-09-02 00:31:36.094144 | orchestrator | Tuesday 02 September 2025 00:31:04 +0000 (0:00:00.552) 0:07:20.067 ***** 2025-09-02 00:31:36.094158 | orchestrator | ok: [testbed-manager] 2025-09-02 00:31:36.094170 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:31:36.094184 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:31:36.094197 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:31:36.094210 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:31:36.094222 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:31:36.094234 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:31:36.094247 | orchestrator | 2025-09-02 00:31:36.094260 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-09-02 00:31:36.094273 | orchestrator | Tuesday 02 September 2025 00:31:09 +0000 (0:00:05.897) 0:07:25.965 ***** 2025-09-02 00:31:36.094286 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:31:36.094300 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:31:36.094313 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:31:36.094325 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:31:36.094338 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:31:36.094350 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:31:36.094363 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:31:36.094376 | orchestrator | 2025-09-02 00:31:36.094389 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-09-02 00:31:36.094428 | orchestrator | Tuesday 02 September 2025 00:31:10 +0000 (0:00:00.580) 0:07:26.545 ***** 2025-09-02 00:31:36.094460 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:31:36.094477 | orchestrator | 2025-09-02 00:31:36.094490 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-09-02 00:31:36.094501 | orchestrator | Tuesday 02 September 2025 00:31:11 +0000 (0:00:00.872) 0:07:27.417 ***** 2025-09-02 00:31:36.094512 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:31:36.094523 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:31:36.094533 | orchestrator | ok: [testbed-manager] 2025-09-02 00:31:36.094545 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:31:36.094555 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:31:36.094566 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:31:36.094577 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:31:36.094587 | orchestrator | 2025-09-02 00:31:36.094598 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-09-02 00:31:36.094609 | orchestrator | Tuesday 02 September 2025 00:31:13 +0000 (0:00:02.092) 0:07:29.510 ***** 2025-09-02 00:31:36.094620 | orchestrator | ok: [testbed-manager] 2025-09-02 00:31:36.094630 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:31:36.094641 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:31:36.094652 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:31:36.094662 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:31:36.094673 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:31:36.094683 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:31:36.094694 | orchestrator | 2025-09-02 00:31:36.094705 | orchestrator | TASK [osism.services.chrony : 
Check if configuration file exists] ************** 2025-09-02 00:31:36.094716 | orchestrator | Tuesday 02 September 2025 00:31:14 +0000 (0:00:01.146) 0:07:30.657 ***** 2025-09-02 00:31:36.094726 | orchestrator | ok: [testbed-manager] 2025-09-02 00:31:36.094737 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:31:36.094755 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:31:36.094765 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:31:36.094776 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:31:36.094787 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:31:36.094797 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:31:36.094808 | orchestrator | 2025-09-02 00:31:36.094819 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-09-02 00:31:36.094829 | orchestrator | Tuesday 02 September 2025 00:31:15 +0000 (0:00:00.933) 0:07:31.590 ***** 2025-09-02 00:31:36.094840 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-02 00:31:36.094853 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-02 00:31:36.094864 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-02 00:31:36.094892 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-02 00:31:36.094904 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-02 00:31:36.094915 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-02 00:31:36.094926 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-02 00:31:36.094937 | orchestrator | 2025-09-02 00:31:36.094948 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-09-02 00:31:36.094959 | orchestrator | Tuesday 02 September 2025 00:31:17 +0000 (0:00:01.733) 0:07:33.323 ***** 2025-09-02 00:31:36.094970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:31:36.094981 | orchestrator | 2025-09-02 00:31:36.094992 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-09-02 00:31:36.095003 | orchestrator | Tuesday 02 September 2025 00:31:18 +0000 (0:00:01.136) 0:07:34.460 ***** 2025-09-02 00:31:36.095014 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:31:36.095024 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:31:36.095035 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:31:36.095046 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:31:36.095057 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:31:36.095067 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:31:36.095078 | orchestrator | changed: [testbed-manager] 2025-09-02 
00:31:36.095088 | orchestrator | 2025-09-02 00:31:36.095099 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-09-02 00:31:36.095110 | orchestrator | Tuesday 02 September 2025 00:31:27 +0000 (0:00:09.264) 0:07:43.724 ***** 2025-09-02 00:31:36.095121 | orchestrator | ok: [testbed-manager] 2025-09-02 00:31:36.095131 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:31:36.095142 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:31:36.095153 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:31:36.095163 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:31:36.095174 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:31:36.095184 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:31:36.095195 | orchestrator | 2025-09-02 00:31:36.095206 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-09-02 00:31:36.095217 | orchestrator | Tuesday 02 September 2025 00:31:29 +0000 (0:00:01.994) 0:07:45.719 ***** 2025-09-02 00:31:36.095227 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:31:36.095245 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:31:36.095255 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:31:36.095266 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:31:36.095276 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:31:36.095287 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:31:36.095297 | orchestrator | 2025-09-02 00:31:36.095313 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-09-02 00:31:36.095324 | orchestrator | Tuesday 02 September 2025 00:31:31 +0000 (0:00:01.341) 0:07:47.060 ***** 2025-09-02 00:31:36.095335 | orchestrator | changed: [testbed-manager] 2025-09-02 00:31:36.095346 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:31:36.095356 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:31:36.095367 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:31:36.095378 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:31:36.095388 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:31:36.095420 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:31:36.095434 | orchestrator | 2025-09-02 00:31:36.095445 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-09-02 00:31:36.095456 | orchestrator | 2025-09-02 00:31:36.095467 | orchestrator | TASK [Include hardening role] ************************************************** 2025-09-02 00:31:36.095477 | orchestrator | Tuesday 02 September 2025 00:31:32 +0000 (0:00:01.298) 0:07:48.359 ***** 2025-09-02 00:31:36.095488 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:31:36.095499 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:31:36.095510 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:31:36.095521 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:31:36.095532 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:31:36.095542 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:31:36.095553 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:31:36.095564 | orchestrator | 2025-09-02 00:31:36.095574 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-09-02 00:31:36.095585 | orchestrator | 2025-09-02 00:31:36.095596 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-09-02 00:31:36.095607 | orchestrator | Tuesday 02 September 2025 00:31:32 
+0000 (0:00:00.529) 0:07:48.889 ***** 2025-09-02 00:31:36.095618 | orchestrator | changed: [testbed-manager] 2025-09-02 00:31:36.095628 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:31:36.095639 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:31:36.095649 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:31:36.095660 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:31:36.095671 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:31:36.095681 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:31:36.095692 | orchestrator | 2025-09-02 00:31:36.095703 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-09-02 00:31:36.095714 | orchestrator | Tuesday 02 September 2025 00:31:34 +0000 (0:00:01.362) 0:07:50.251 ***** 2025-09-02 00:31:36.095725 | orchestrator | ok: [testbed-manager] 2025-09-02 00:31:36.095736 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:31:36.095746 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:31:36.095757 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:31:36.095768 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:31:36.095778 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:31:36.095789 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:31:36.095800 | orchestrator | 2025-09-02 00:31:36.095811 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-09-02 00:31:36.095828 | orchestrator | Tuesday 02 September 2025 00:31:36 +0000 (0:00:01.843) 0:07:52.095 ***** 2025-09-02 00:32:00.328800 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:32:00.328924 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:32:00.328940 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:32:00.328951 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:32:00.328962 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:32:00.328973 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:32:00.328984 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:32:00.329018 | orchestrator | 2025-09-02 00:32:00.329031 | orchestrator | TASK [Include smartd role] ***************************************************** 2025-09-02 00:32:00.329044 | orchestrator | Tuesday 02 September 2025 00:31:36 +0000 (0:00:00.549) 0:07:52.644 ***** 2025-09-02 00:32:00.329055 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:32:00.329067 | orchestrator | 2025-09-02 00:32:00.329078 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-09-02 00:32:00.329089 | orchestrator | Tuesday 02 September 2025 00:31:37 +0000 (0:00:01.028) 0:07:53.673 ***** 2025-09-02 00:32:00.329100 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:32:00.329114 | orchestrator | 2025-09-02 00:32:00.329124 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-09-02 00:32:00.329135 | orchestrator | Tuesday 02 September 2025 00:31:38 +0000 (0:00:00.829) 0:07:54.502 ***** 2025-09-02 00:32:00.329146 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:32:00.329156 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:32:00.329167 | 
orchestrator | changed: [testbed-node-2] 2025-09-02 00:32:00.329178 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:32:00.329188 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:32:00.329199 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:32:00.329209 | orchestrator | changed: [testbed-manager] 2025-09-02 00:32:00.329219 | orchestrator | 2025-09-02 00:32:00.329230 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-09-02 00:32:00.329241 | orchestrator | Tuesday 02 September 2025 00:31:47 +0000 (0:00:08.547) 0:08:03.050 ***** 2025-09-02 00:32:00.329251 | orchestrator | changed: [testbed-manager] 2025-09-02 00:32:00.329262 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:32:00.329272 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:32:00.329283 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:32:00.329293 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:32:00.329303 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:32:00.329314 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:32:00.329324 | orchestrator | 2025-09-02 00:32:00.329338 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-09-02 00:32:00.329350 | orchestrator | Tuesday 02 September 2025 00:31:47 +0000 (0:00:00.923) 0:08:03.973 ***** 2025-09-02 00:32:00.329363 | orchestrator | changed: [testbed-manager] 2025-09-02 00:32:00.329400 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:32:00.329413 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:32:00.329426 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:32:00.329439 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:32:00.329452 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:32:00.329464 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:32:00.329476 | orchestrator | 2025-09-02 00:32:00.329490 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-09-02 00:32:00.329502 | orchestrator | Tuesday 02 September 2025 00:31:49 +0000 (0:00:01.571) 0:08:05.545 ***** 2025-09-02 00:32:00.329515 | orchestrator | changed: [testbed-manager] 2025-09-02 00:32:00.329527 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:32:00.329589 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:32:00.329604 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:32:00.329616 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:32:00.329629 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:32:00.329643 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:32:00.329655 | orchestrator | 2025-09-02 00:32:00.329668 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-09-02 00:32:00.329683 | orchestrator | Tuesday 02 September 2025 00:31:51 +0000 (0:00:01.789) 0:08:07.334 ***** 2025-09-02 00:32:00.329704 | orchestrator | changed: [testbed-manager] 2025-09-02 00:32:00.329715 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:32:00.329725 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:32:00.329735 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:32:00.329746 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:32:00.329757 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:32:00.329767 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:32:00.329778 | orchestrator | 2025-09-02 00:32:00.329789 | orchestrator | RUNNING HANDLER 
[osism.services.smartd : Restart smartd service] *************** 2025-09-02 00:32:00.329799 | orchestrator | Tuesday 02 September 2025 00:31:52 +0000 (0:00:01.243) 0:08:08.577 ***** 2025-09-02 00:32:00.329810 | orchestrator | changed: [testbed-manager] 2025-09-02 00:32:00.329821 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:32:00.329831 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:32:00.329842 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:32:00.329852 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:32:00.329863 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:32:00.329873 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:32:00.329884 | orchestrator | 2025-09-02 00:32:00.329895 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-09-02 00:32:00.329905 | orchestrator | 2025-09-02 00:32:00.329916 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-09-02 00:32:00.329927 | orchestrator | Tuesday 02 September 2025 00:31:53 +0000 (0:00:01.433) 0:08:10.011 ***** 2025-09-02 00:32:00.329937 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:32:00.329948 | orchestrator | 2025-09-02 00:32:00.329959 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-02 00:32:00.329986 | orchestrator | Tuesday 02 September 2025 00:31:54 +0000 (0:00:00.853) 0:08:10.865 ***** 2025-09-02 00:32:00.329997 | orchestrator | ok: [testbed-manager] 2025-09-02 00:32:00.330009 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:32:00.330086 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:32:00.330100 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:32:00.330111 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:32:00.330122 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:32:00.330133 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:32:00.330144 | orchestrator | 2025-09-02 00:32:00.330155 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-02 00:32:00.330166 | orchestrator | Tuesday 02 September 2025 00:31:55 +0000 (0:00:00.892) 0:08:11.757 ***** 2025-09-02 00:32:00.330177 | orchestrator | changed: [testbed-manager] 2025-09-02 00:32:00.330188 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:32:00.330199 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:32:00.330210 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:32:00.330221 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:32:00.330232 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:32:00.330242 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:32:00.330253 | orchestrator | 2025-09-02 00:32:00.330264 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-09-02 00:32:00.330275 | orchestrator | Tuesday 02 September 2025 00:31:57 +0000 (0:00:01.383) 0:08:13.141 ***** 2025-09-02 00:32:00.330287 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:32:00.330298 | orchestrator | 2025-09-02 00:32:00.330309 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-02 00:32:00.330320 | orchestrator | Tuesday 02 September 2025 
00:31:58 +0000 (0:00:00.872) 0:08:14.014 ***** 2025-09-02 00:32:00.330331 | orchestrator | ok: [testbed-manager] 2025-09-02 00:32:00.330342 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:32:00.330353 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:32:00.330364 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:32:00.330393 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:32:00.330412 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:32:00.330423 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:32:00.330434 | orchestrator | 2025-09-02 00:32:00.330445 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-02 00:32:00.330456 | orchestrator | Tuesday 02 September 2025 00:31:58 +0000 (0:00:00.944) 0:08:14.959 ***** 2025-09-02 00:32:00.330467 | orchestrator | changed: [testbed-manager] 2025-09-02 00:32:00.330478 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:32:00.330489 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:32:00.330500 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:32:00.330511 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:32:00.330521 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:32:00.330532 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:32:00.330543 | orchestrator | 2025-09-02 00:32:00.330554 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:32:00.330566 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-09-02 00:32:00.330583 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-09-02 00:32:00.330595 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-02 00:32:00.330606 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-02 00:32:00.330617 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-02 00:32:00.330628 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-02 00:32:00.330639 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-02 00:32:00.330650 | orchestrator | 2025-09-02 00:32:00.330661 | orchestrator | 2025-09-02 00:32:00.330672 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:32:00.330684 | orchestrator | Tuesday 02 September 2025 00:32:00 +0000 (0:00:01.351) 0:08:16.311 ***** 2025-09-02 00:32:00.330695 | orchestrator | =============================================================================== 2025-09-02 00:32:00.330706 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.79s 2025-09-02 00:32:00.330717 | orchestrator | osism.commons.packages : Download required packages -------------------- 41.03s 2025-09-02 00:32:00.330727 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.51s 2025-09-02 00:32:00.330738 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.75s 2025-09-02 00:32:00.330749 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.21s 2025-09-02 00:32:00.330759 | orchestrator | osism.commons.packages : Remove 
dependencies that are no longer required -- 12.07s 2025-09-02 00:32:00.330771 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.15s 2025-09-02 00:32:00.330781 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.01s 2025-09-02 00:32:00.330792 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.61s 2025-09-02 00:32:00.330802 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.26s 2025-09-02 00:32:00.330821 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.60s 2025-09-02 00:32:00.809254 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.55s 2025-09-02 00:32:00.809350 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.33s 2025-09-02 00:32:00.809426 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.23s 2025-09-02 00:32:00.809440 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.87s 2025-09-02 00:32:00.809451 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.34s 2025-09-02 00:32:00.809462 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.18s 2025-09-02 00:32:00.809474 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.24s 2025-09-02 00:32:00.809485 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.90s 2025-09-02 00:32:00.809496 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.82s 2025-09-02 00:32:01.147312 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-02 00:32:01.147428 | orchestrator | + osism apply network 2025-09-02 00:32:13.855869 | orchestrator | 2025-09-02 00:32:13 | INFO  | Task 96ee1f11-5d5b-427b-b8d6-73cd780213e6 (network) was prepared for execution. 2025-09-02 00:32:13.855971 | orchestrator | 2025-09-02 00:32:13 | INFO  | It takes a moment until task 96ee1f11-5d5b-427b-b8d6-73cd780213e6 (network) has been started and output is visible here. 
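For reference (illustrative commands, not part of the job output above): the bootstrap play just completed installs the docker-compose-plugin package and copies and enables an osism.target systemd unit on every node. On one of the nodes, that state could be checked roughly like this; the unit name osism.target is taken from the task names above, everything else is standard tooling.

    docker compose version                     # Compose v2 provided by the docker-compose-plugin package
    systemctl is-enabled osism.target          # target copied and enabled by osism.commons.docker_compose
    systemctl list-dependencies osism.target   # units hooked into the target, if any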
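Likewise illustrative (not produced by this job): the osism.services.chrony role above templates chrony.conf.j2 onto each host and restarts the service, so time synchronisation can be spot-checked with the usual chrony tooling. The config path is the Debian-family default and is an assumption, not something printed in the log.

    chronyc tracking                # current reference source and measured offset
    chronyc sources -v              # servers taken from the templated configuration
    systemctl is-active chrony      # service managed and restarted by the role
    cat /etc/chrony/chrony.conf     # assumed default path on Debian-family hosts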
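A similar illustrative spot-check (again not part of the job output) covers the lldpd, smartd and journald changes from the bootstrap play. The smartmontools unit name differs between distributions, so both common names are tried; nothing here is specific to this job beyond the role names shown above.

    lldpcli show neighbors                                              # lldpd installed by osism.services.lldpd
    systemctl is-active smartmontools || systemctl is-active smartd     # service restarted by the smartd handler
    journalctl -u systemd-journald -n 20                                # journald restarted after the copied configuration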
2025-09-02 00:32:42.547465 | orchestrator | 2025-09-02 00:32:42.547587 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-09-02 00:32:42.547603 | orchestrator | 2025-09-02 00:32:42.547615 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-09-02 00:32:42.547627 | orchestrator | Tuesday 02 September 2025 00:32:18 +0000 (0:00:00.271) 0:00:00.272 ***** 2025-09-02 00:32:42.547638 | orchestrator | ok: [testbed-manager] 2025-09-02 00:32:42.547651 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:32:42.547663 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:32:42.547674 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:32:42.547685 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:32:42.547695 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:32:42.547706 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:32:42.547717 | orchestrator | 2025-09-02 00:32:42.547728 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-09-02 00:32:42.547739 | orchestrator | Tuesday 02 September 2025 00:32:18 +0000 (0:00:00.709) 0:00:00.981 ***** 2025-09-02 00:32:42.547752 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:32:42.547766 | orchestrator | 2025-09-02 00:32:42.547777 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-09-02 00:32:42.547788 | orchestrator | Tuesday 02 September 2025 00:32:20 +0000 (0:00:01.256) 0:00:02.238 ***** 2025-09-02 00:32:42.547799 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:32:42.547810 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:32:42.547821 | orchestrator | ok: [testbed-manager] 2025-09-02 00:32:42.547832 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:32:42.547842 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:32:42.547853 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:32:42.547864 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:32:42.547875 | orchestrator | 2025-09-02 00:32:42.547885 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-09-02 00:32:42.547896 | orchestrator | Tuesday 02 September 2025 00:32:22 +0000 (0:00:01.943) 0:00:04.181 ***** 2025-09-02 00:32:42.547907 | orchestrator | ok: [testbed-manager] 2025-09-02 00:32:42.547921 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:32:42.547934 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:32:42.547946 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:32:42.547959 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:32:42.547971 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:32:42.547983 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:32:42.547996 | orchestrator | 2025-09-02 00:32:42.548008 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-09-02 00:32:42.548046 | orchestrator | Tuesday 02 September 2025 00:32:23 +0000 (0:00:01.725) 0:00:05.906 ***** 2025-09-02 00:32:42.548060 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-09-02 00:32:42.548073 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-09-02 00:32:42.548086 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-09-02 00:32:42.548098 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-09-02 00:32:42.548111 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-09-02 00:32:42.548124 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-09-02 00:32:42.548137 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-09-02 00:32:42.548149 | orchestrator | 2025-09-02 00:32:42.548162 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-09-02 00:32:42.548175 | orchestrator | Tuesday 02 September 2025 00:32:24 +0000 (0:00:01.013) 0:00:06.920 ***** 2025-09-02 00:32:42.548187 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-02 00:32:42.548200 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-02 00:32:42.548213 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-02 00:32:42.548226 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-02 00:32:42.548239 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-02 00:32:42.548252 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-02 00:32:42.548264 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-02 00:32:42.548275 | orchestrator | 2025-09-02 00:32:42.548286 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-09-02 00:32:42.548297 | orchestrator | Tuesday 02 September 2025 00:32:28 +0000 (0:00:03.310) 0:00:10.231 ***** 2025-09-02 00:32:42.548308 | orchestrator | changed: [testbed-manager] 2025-09-02 00:32:42.548319 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:32:42.548368 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:32:42.548388 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:32:42.548401 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:32:42.548412 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:32:42.548422 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:32:42.548433 | orchestrator | 2025-09-02 00:32:42.548444 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-09-02 00:32:42.548454 | orchestrator | Tuesday 02 September 2025 00:32:29 +0000 (0:00:01.464) 0:00:11.695 ***** 2025-09-02 00:32:42.548465 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-02 00:32:42.548476 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-02 00:32:42.548487 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-02 00:32:42.548497 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-02 00:32:42.548508 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-02 00:32:42.548518 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-02 00:32:42.548529 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-02 00:32:42.548540 | orchestrator | 2025-09-02 00:32:42.548550 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-09-02 00:32:42.548561 | orchestrator | Tuesday 02 September 2025 00:32:31 +0000 (0:00:01.970) 0:00:13.665 ***** 2025-09-02 00:32:42.548572 | orchestrator | ok: [testbed-manager] 2025-09-02 00:32:42.548582 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:32:42.548593 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:32:42.548604 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:32:42.548614 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:32:42.548625 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:32:42.548635 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:32:42.548646 | orchestrator | 2025-09-02 
00:32:42.548657 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-09-02 00:32:42.548686 | orchestrator | Tuesday 02 September 2025 00:32:32 +0000 (0:00:01.142) 0:00:14.808 ***** 2025-09-02 00:32:42.548697 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:32:42.548708 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:32:42.548719 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:32:42.548740 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:32:42.548751 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:32:42.548762 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:32:42.548773 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:32:42.548784 | orchestrator | 2025-09-02 00:32:42.548795 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-09-02 00:32:42.548806 | orchestrator | Tuesday 02 September 2025 00:32:33 +0000 (0:00:00.695) 0:00:15.503 ***** 2025-09-02 00:32:42.548817 | orchestrator | ok: [testbed-manager] 2025-09-02 00:32:42.548828 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:32:42.548839 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:32:42.548850 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:32:42.548861 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:32:42.548872 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:32:42.548883 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:32:42.548894 | orchestrator | 2025-09-02 00:32:42.548905 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-09-02 00:32:42.548931 | orchestrator | Tuesday 02 September 2025 00:32:35 +0000 (0:00:02.121) 0:00:17.625 ***** 2025-09-02 00:32:42.548953 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:32:42.548964 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:32:42.548975 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:32:42.548986 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:32:42.548997 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:32:42.549024 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:32:42.549036 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-09-02 00:32:42.549048 | orchestrator | 2025-09-02 00:32:42.549058 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-09-02 00:32:42.549069 | orchestrator | Tuesday 02 September 2025 00:32:36 +0000 (0:00:00.945) 0:00:18.571 ***** 2025-09-02 00:32:42.549080 | orchestrator | ok: [testbed-manager] 2025-09-02 00:32:42.549091 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:32:42.549101 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:32:42.549112 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:32:42.549123 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:32:42.549133 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:32:42.549144 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:32:42.549155 | orchestrator | 2025-09-02 00:32:42.549166 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-09-02 00:32:42.549177 | orchestrator | Tuesday 02 September 2025 00:32:38 +0000 (0:00:01.627) 0:00:20.198 ***** 2025-09-02 00:32:42.549188 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:32:42.549201 | orchestrator | 2025-09-02 00:32:42.549212 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-02 00:32:42.549222 | orchestrator | Tuesday 02 September 2025 00:32:39 +0000 (0:00:01.303) 0:00:21.502 ***** 2025-09-02 00:32:42.549233 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:32:42.549244 | orchestrator | ok: [testbed-manager] 2025-09-02 00:32:42.549255 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:32:42.549266 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:32:42.549277 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:32:42.549287 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:32:42.549298 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:32:42.549309 | orchestrator | 2025-09-02 00:32:42.549320 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-09-02 00:32:42.549356 | orchestrator | Tuesday 02 September 2025 00:32:40 +0000 (0:00:00.961) 0:00:22.464 ***** 2025-09-02 00:32:42.549367 | orchestrator | ok: [testbed-manager] 2025-09-02 00:32:42.549378 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:32:42.549397 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:32:42.549408 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:32:42.549419 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:32:42.549429 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:32:42.549440 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:32:42.549451 | orchestrator | 2025-09-02 00:32:42.549462 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-02 00:32:42.549473 | orchestrator | Tuesday 02 September 2025 00:32:41 +0000 (0:00:00.868) 0:00:23.332 ***** 2025-09-02 00:32:42.549484 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-09-02 00:32:42.549495 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-09-02 00:32:42.549506 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-09-02 00:32:42.549516 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-09-02 00:32:42.549527 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-02 00:32:42.549538 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-09-02 00:32:42.549549 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-02 00:32:42.549560 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-09-02 00:32:42.549571 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-09-02 00:32:42.549581 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-02 00:32:42.549592 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-02 00:32:42.549603 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-02 00:32:42.549613 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-02 00:32:42.549625 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-02 
00:32:42.549635 | orchestrator | 2025-09-02 00:32:42.549654 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-09-02 00:32:59.945038 | orchestrator | Tuesday 02 September 2025 00:32:42 +0000 (0:00:01.256) 0:00:24.589 ***** 2025-09-02 00:32:59.945156 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:32:59.945170 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:32:59.945180 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:32:59.945189 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:32:59.945198 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:32:59.945207 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:32:59.945216 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:32:59.945225 | orchestrator | 2025-09-02 00:32:59.945235 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-09-02 00:32:59.945244 | orchestrator | Tuesday 02 September 2025 00:32:43 +0000 (0:00:00.648) 0:00:25.238 ***** 2025-09-02 00:32:59.945254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-2, testbed-node-0, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:32:59.945266 | orchestrator | 2025-09-02 00:32:59.945275 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-09-02 00:32:59.945284 | orchestrator | Tuesday 02 September 2025 00:32:48 +0000 (0:00:04.941) 0:00:30.179 ***** 2025-09-02 00:32:59.945362 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-02 00:32:59.945380 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-02 00:32:59.945408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-02 00:32:59.945418 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-02 00:32:59.945427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-02 00:32:59.945436 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-02 00:32:59.945444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': 
['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-02 00:32:59.945453 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-02 00:32:59.945462 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-02 00:32:59.945476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-02 00:32:59.945485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-02 00:32:59.945510 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-02 00:32:59.945520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-02 00:32:59.945528 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-02 00:32:59.945537 | orchestrator | 2025-09-02 00:32:59.945546 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-09-02 00:32:59.945555 | orchestrator | Tuesday 02 September 2025 00:32:54 +0000 (0:00:05.984) 0:00:36.164 ***** 2025-09-02 00:32:59.945564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-02 00:32:59.945582 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-02 00:32:59.945593 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-02 00:32:59.945604 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-02 00:32:59.945615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-02 00:32:59.945625 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-02 00:32:59.945636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-02 00:32:59.945646 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-02 00:32:59.945656 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-02 00:32:59.945667 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-02 00:32:59.945677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-02 00:32:59.945688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-02 00:32:59.945709 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-02 00:33:06.461778 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-02 00:33:06.461901 | orchestrator | 2025-09-02 00:33:06.461940 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-02 00:33:06.461955 | orchestrator | Tuesday 02 September 2025 00:32:59 +0000 (0:00:05.821) 
0:00:41.985 ***** 2025-09-02 00:33:06.461990 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:33:06.462003 | orchestrator | 2025-09-02 00:33:06.462014 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-02 00:33:06.462080 | orchestrator | Tuesday 02 September 2025 00:33:01 +0000 (0:00:01.360) 0:00:43.346 ***** 2025-09-02 00:33:06.462098 | orchestrator | ok: [testbed-manager] 2025-09-02 00:33:06.462110 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:33:06.462121 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:33:06.462132 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:33:06.462142 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:33:06.462154 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:33:06.462164 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:33:06.462175 | orchestrator | 2025-09-02 00:33:06.462186 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-02 00:33:06.462197 | orchestrator | Tuesday 02 September 2025 00:33:02 +0000 (0:00:01.187) 0:00:44.533 ***** 2025-09-02 00:33:06.462208 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-02 00:33:06.462220 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-02 00:33:06.462232 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-02 00:33:06.462242 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-02 00:33:06.462253 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:33:06.462265 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-02 00:33:06.462276 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-02 00:33:06.462286 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-02 00:33:06.462324 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-02 00:33:06.462335 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:33:06.462346 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-02 00:33:06.462357 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-02 00:33:06.462368 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-02 00:33:06.462378 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-02 00:33:06.462389 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:33:06.462400 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-02 00:33:06.462411 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-02 00:33:06.462422 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-02 00:33:06.462432 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-02 00:33:06.462443 | orchestrator | skipping: [testbed-node-2] 2025-09-02 
00:33:06.462454 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-02 00:33:06.462464 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-02 00:33:06.462475 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-02 00:33:06.462486 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-02 00:33:06.462496 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:33:06.462507 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-02 00:33:06.462518 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-02 00:33:06.462539 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-02 00:33:06.462550 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-02 00:33:06.462561 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:33:06.462572 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-02 00:33:06.462583 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-02 00:33:06.462593 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-02 00:33:06.462604 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-02 00:33:06.462615 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:33:06.462632 | orchestrator | 2025-09-02 00:33:06.462652 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-02 00:33:06.462693 | orchestrator | Tuesday 02 September 2025 00:33:04 +0000 (0:00:02.110) 0:00:46.643 ***** 2025-09-02 00:33:06.462714 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:33:06.462734 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:33:06.462753 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:33:06.462774 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:33:06.462794 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:33:06.462814 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:33:06.462828 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:33:06.462838 | orchestrator | 2025-09-02 00:33:06.462849 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-02 00:33:06.462860 | orchestrator | Tuesday 02 September 2025 00:33:05 +0000 (0:00:00.693) 0:00:47.337 ***** 2025-09-02 00:33:06.462871 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:33:06.462881 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:33:06.462892 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:33:06.462903 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:33:06.462913 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:33:06.462924 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:33:06.462935 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:33:06.462945 | orchestrator | 2025-09-02 00:33:06.462956 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:33:06.462974 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-02 00:33:06.462988 | 
orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-02 00:33:06.462999 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-02 00:33:06.463010 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-02 00:33:06.463021 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-02 00:33:06.463032 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-02 00:33:06.463043 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-02 00:33:06.463053 | orchestrator | 2025-09-02 00:33:06.463064 | orchestrator | 2025-09-02 00:33:06.463075 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:33:06.463086 | orchestrator | Tuesday 02 September 2025 00:33:06 +0000 (0:00:00.744) 0:00:48.081 ***** 2025-09-02 00:33:06.463106 | orchestrator | =============================================================================== 2025-09-02 00:33:06.463117 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.98s 2025-09-02 00:33:06.463128 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.82s 2025-09-02 00:33:06.463139 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.94s 2025-09-02 00:33:06.463150 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.31s 2025-09-02 00:33:06.463160 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.12s 2025-09-02 00:33:06.463171 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.11s 2025-09-02 00:33:06.463182 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.97s 2025-09-02 00:33:06.463193 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.94s 2025-09-02 00:33:06.463204 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.73s 2025-09-02 00:33:06.463214 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.63s 2025-09-02 00:33:06.463225 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.46s 2025-09-02 00:33:06.463236 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.36s 2025-09-02 00:33:06.463247 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.30s 2025-09-02 00:33:06.463258 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.26s 2025-09-02 00:33:06.463269 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.26s 2025-09-02 00:33:06.463279 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.19s 2025-09-02 00:33:06.463290 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.14s 2025-09-02 00:33:06.463322 | orchestrator | osism.commons.network : Create required directories --------------------- 1.01s 2025-09-02 00:33:06.463333 | orchestrator | osism.commons.network : List existing configuration 
files --------------- 0.96s 2025-09-02 00:33:06.463344 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.95s 2025-09-02 00:33:06.794582 | orchestrator | + osism apply wireguard 2025-09-02 00:33:18.830627 | orchestrator | 2025-09-02 00:33:18 | INFO  | Task e90eadf1-34c5-4da6-bc86-aef5a8616358 (wireguard) was prepared for execution. 2025-09-02 00:33:18.830724 | orchestrator | 2025-09-02 00:33:18 | INFO  | It takes a moment until task e90eadf1-34c5-4da6-bc86-aef5a8616358 (wireguard) has been started and output is visible here. 2025-09-02 00:33:39.100879 | orchestrator | 2025-09-02 00:33:39.101009 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-02 00:33:39.101028 | orchestrator | 2025-09-02 00:33:39.101041 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-02 00:33:39.101053 | orchestrator | Tuesday 02 September 2025 00:33:22 +0000 (0:00:00.235) 0:00:00.235 ***** 2025-09-02 00:33:39.101065 | orchestrator | ok: [testbed-manager] 2025-09-02 00:33:39.101077 | orchestrator | 2025-09-02 00:33:39.101089 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-02 00:33:39.101100 | orchestrator | Tuesday 02 September 2025 00:33:24 +0000 (0:00:01.623) 0:00:01.858 ***** 2025-09-02 00:33:39.101111 | orchestrator | changed: [testbed-manager] 2025-09-02 00:33:39.101123 | orchestrator | 2025-09-02 00:33:39.101134 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-02 00:33:39.101146 | orchestrator | Tuesday 02 September 2025 00:33:31 +0000 (0:00:06.754) 0:00:08.613 ***** 2025-09-02 00:33:39.101157 | orchestrator | changed: [testbed-manager] 2025-09-02 00:33:39.101167 | orchestrator | 2025-09-02 00:33:39.101178 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-02 00:33:39.101209 | orchestrator | Tuesday 02 September 2025 00:33:31 +0000 (0:00:00.564) 0:00:09.177 ***** 2025-09-02 00:33:39.101220 | orchestrator | changed: [testbed-manager] 2025-09-02 00:33:39.101310 | orchestrator | 2025-09-02 00:33:39.101325 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-02 00:33:39.101337 | orchestrator | Tuesday 02 September 2025 00:33:32 +0000 (0:00:00.427) 0:00:09.604 ***** 2025-09-02 00:33:39.101348 | orchestrator | ok: [testbed-manager] 2025-09-02 00:33:39.101359 | orchestrator | 2025-09-02 00:33:39.101370 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-02 00:33:39.101381 | orchestrator | Tuesday 02 September 2025 00:33:32 +0000 (0:00:00.521) 0:00:10.126 ***** 2025-09-02 00:33:39.101392 | orchestrator | ok: [testbed-manager] 2025-09-02 00:33:39.101403 | orchestrator | 2025-09-02 00:33:39.101416 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-02 00:33:39.101429 | orchestrator | Tuesday 02 September 2025 00:33:33 +0000 (0:00:00.561) 0:00:10.687 ***** 2025-09-02 00:33:39.101442 | orchestrator | ok: [testbed-manager] 2025-09-02 00:33:39.101455 | orchestrator | 2025-09-02 00:33:39.101468 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-02 00:33:39.101481 | orchestrator | Tuesday 02 September 2025 00:33:33 +0000 (0:00:00.447) 0:00:11.135 ***** 2025-09-02 00:33:39.101494 | orchestrator | 
changed: [testbed-manager] 2025-09-02 00:33:39.101507 | orchestrator | 2025-09-02 00:33:39.101519 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-02 00:33:39.101532 | orchestrator | Tuesday 02 September 2025 00:33:35 +0000 (0:00:01.192) 0:00:12.328 ***** 2025-09-02 00:33:39.101544 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-02 00:33:39.101557 | orchestrator | changed: [testbed-manager] 2025-09-02 00:33:39.101570 | orchestrator | 2025-09-02 00:33:39.101583 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-02 00:33:39.101595 | orchestrator | Tuesday 02 September 2025 00:33:36 +0000 (0:00:00.949) 0:00:13.277 ***** 2025-09-02 00:33:39.101608 | orchestrator | changed: [testbed-manager] 2025-09-02 00:33:39.101621 | orchestrator | 2025-09-02 00:33:39.101634 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-02 00:33:39.101647 | orchestrator | Tuesday 02 September 2025 00:33:37 +0000 (0:00:01.746) 0:00:15.024 ***** 2025-09-02 00:33:39.101660 | orchestrator | changed: [testbed-manager] 2025-09-02 00:33:39.101673 | orchestrator | 2025-09-02 00:33:39.101685 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:33:39.101699 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:33:39.101712 | orchestrator | 2025-09-02 00:33:39.101731 | orchestrator | 2025-09-02 00:33:39.101750 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:33:39.101769 | orchestrator | Tuesday 02 September 2025 00:33:38 +0000 (0:00:00.955) 0:00:15.979 ***** 2025-09-02 00:33:39.101787 | orchestrator | =============================================================================== 2025-09-02 00:33:39.101805 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.75s 2025-09-02 00:33:39.101823 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.75s 2025-09-02 00:33:39.101840 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.62s 2025-09-02 00:33:39.101859 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.19s 2025-09-02 00:33:39.101879 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.96s 2025-09-02 00:33:39.101897 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.95s 2025-09-02 00:33:39.101916 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2025-09-02 00:33:39.101927 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.56s 2025-09-02 00:33:39.101938 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s 2025-09-02 00:33:39.101949 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s 2025-09-02 00:33:39.101971 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s 2025-09-02 00:33:39.410981 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-02 00:33:39.449699 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-02 00:33:39.449755 | 
orchestrator | Dload Upload Total Spent Left Speed 2025-09-02 00:33:39.522654 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 205 0 --:--:-- --:--:-- --:--:-- 205 100 15 100 15 0 0 205 0 --:--:-- --:--:-- --:--:-- 205 2025-09-02 00:33:39.538641 | orchestrator | + osism apply --environment custom workarounds 2025-09-02 00:33:41.530215 | orchestrator | 2025-09-02 00:33:41 | INFO  | Trying to run play workarounds in environment custom 2025-09-02 00:33:51.786177 | orchestrator | 2025-09-02 00:33:51 | INFO  | Task b7bbd90a-8d16-4477-8f80-32408f7f9c29 (workarounds) was prepared for execution. 2025-09-02 00:33:51.786358 | orchestrator | 2025-09-02 00:33:51 | INFO  | It takes a moment until task b7bbd90a-8d16-4477-8f80-32408f7f9c29 (workarounds) has been started and output is visible here. 2025-09-02 00:34:17.222682 | orchestrator | 2025-09-02 00:34:17.222810 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 00:34:17.222828 | orchestrator | 2025-09-02 00:34:17.222840 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-02 00:34:17.222853 | orchestrator | Tuesday 02 September 2025 00:33:55 +0000 (0:00:00.154) 0:00:00.154 ***** 2025-09-02 00:34:17.222886 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-02 00:34:17.222898 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-02 00:34:17.222910 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-02 00:34:17.222921 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-02 00:34:17.222932 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-02 00:34:17.222943 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-02 00:34:17.222954 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-02 00:34:17.222965 | orchestrator | 2025-09-02 00:34:17.222976 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-02 00:34:17.222987 | orchestrator | 2025-09-02 00:34:17.222998 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-02 00:34:17.223009 | orchestrator | Tuesday 02 September 2025 00:33:56 +0000 (0:00:00.852) 0:00:01.006 ***** 2025-09-02 00:34:17.223020 | orchestrator | ok: [testbed-manager] 2025-09-02 00:34:17.223033 | orchestrator | 2025-09-02 00:34:17.223044 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-02 00:34:17.223055 | orchestrator | 2025-09-02 00:34:17.223065 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-02 00:34:17.223076 | orchestrator | Tuesday 02 September 2025 00:33:59 +0000 (0:00:02.639) 0:00:03.646 ***** 2025-09-02 00:34:17.223087 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:34:17.223098 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:34:17.223109 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:34:17.223120 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:34:17.223131 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:34:17.223142 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:34:17.223154 | orchestrator | 2025-09-02 00:34:17.223165 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] 
************************* 2025-09-02 00:34:17.223176 | orchestrator | 2025-09-02 00:34:17.223187 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-02 00:34:17.223198 | orchestrator | Tuesday 02 September 2025 00:34:01 +0000 (0:00:01.832) 0:00:05.479 ***** 2025-09-02 00:34:17.223241 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-02 00:34:17.223279 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-02 00:34:17.223292 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-02 00:34:17.223303 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-02 00:34:17.223314 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-02 00:34:17.223325 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-02 00:34:17.223336 | orchestrator | 2025-09-02 00:34:17.223347 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-09-02 00:34:17.223358 | orchestrator | Tuesday 02 September 2025 00:34:02 +0000 (0:00:01.524) 0:00:07.003 ***** 2025-09-02 00:34:17.223369 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:34:17.223380 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:34:17.223391 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:34:17.223402 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:34:17.223413 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:34:17.223424 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:34:17.223435 | orchestrator | 2025-09-02 00:34:17.223446 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-02 00:34:17.223457 | orchestrator | Tuesday 02 September 2025 00:34:06 +0000 (0:00:03.823) 0:00:10.827 ***** 2025-09-02 00:34:17.223468 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:34:17.223479 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:34:17.223490 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:34:17.223501 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:34:17.223512 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:34:17.223523 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:34:17.223534 | orchestrator | 2025-09-02 00:34:17.223545 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-02 00:34:17.223556 | orchestrator | 2025-09-02 00:34:17.223567 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-02 00:34:17.223578 | orchestrator | Tuesday 02 September 2025 00:34:07 +0000 (0:00:00.771) 0:00:11.598 ***** 2025-09-02 00:34:17.223589 | orchestrator | changed: [testbed-manager] 2025-09-02 00:34:17.223600 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:34:17.223611 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:34:17.223622 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:34:17.223633 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:34:17.223644 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:34:17.223655 | orchestrator | changed: [testbed-node-2] 
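The "Copy custom CA certificates" and "Run update-ca-certificates" tasks above distribute the testbed CA (/opt/configuration/environments/kolla/certificates/ca/testbed.crt) to the non-manager nodes and rebuild the system trust store; "Run update-ca-trust" is skipped because all hosts are Debian-family. A minimal shell sketch of the same idea, assuming the standard Debian/Ubuntu trust-store location (the destination directory is an assumption, not taken from the role):

  # Install a custom CA on a Debian/Ubuntu host and refresh the trust store
  sudo cp /opt/configuration/environments/kolla/certificates/ca/testbed.crt \
      /usr/local/share/ca-certificates/testbed.crt   # file must end in .crt
  sudo update-ca-certificates                        # regenerates /etc/ssl/certs
  # RedHat-family equivalent (the skipped "Run update-ca-trust" branch):
  #   sudo cp testbed.crt /etc/pki/ca-trust/source/anchors/ && sudo update-ca-trust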
2025-09-02 00:34:17.223666 | orchestrator | 2025-09-02 00:34:17.223676 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-09-02 00:34:17.223688 | orchestrator | Tuesday 02 September 2025 00:34:09 +0000 (0:00:01.680) 0:00:13.279 ***** 2025-09-02 00:34:17.223699 | orchestrator | changed: [testbed-manager] 2025-09-02 00:34:17.223710 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:34:17.223720 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:34:17.223731 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:34:17.223742 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:34:17.223753 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:34:17.223785 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:34:17.223796 | orchestrator | 2025-09-02 00:34:17.223808 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-09-02 00:34:17.223819 | orchestrator | Tuesday 02 September 2025 00:34:10 +0000 (0:00:01.697) 0:00:14.976 ***** 2025-09-02 00:34:17.223830 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:34:17.223841 | orchestrator | ok: [testbed-manager] 2025-09-02 00:34:17.223852 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:34:17.223872 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:34:17.223889 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:34:17.223900 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:34:17.223911 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:34:17.223922 | orchestrator | 2025-09-02 00:34:17.223934 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-09-02 00:34:17.223945 | orchestrator | Tuesday 02 September 2025 00:34:12 +0000 (0:00:01.467) 0:00:16.443 ***** 2025-09-02 00:34:17.223956 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:34:17.223967 | orchestrator | changed: [testbed-manager] 2025-09-02 00:34:17.223978 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:34:17.223989 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:34:17.224000 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:34:17.224011 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:34:17.224022 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:34:17.224033 | orchestrator | 2025-09-02 00:34:17.224044 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-09-02 00:34:17.224056 | orchestrator | Tuesday 02 September 2025 00:34:13 +0000 (0:00:01.701) 0:00:18.145 ***** 2025-09-02 00:34:17.224067 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:34:17.224078 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:34:17.224089 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:34:17.224100 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:34:17.224111 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:34:17.224122 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:34:17.224132 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:34:17.224143 | orchestrator | 2025-09-02 00:34:17.224154 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-09-02 00:34:17.224166 | orchestrator | 2025-09-02 00:34:17.224177 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-09-02 00:34:17.224188 | orchestrator | Tuesday 02 September 2025 00:34:14 +0000 (0:00:00.622) 0:00:18.768 ***** 2025-09-02 
00:34:17.224199 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:34:17.224245 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:34:17.224256 | orchestrator | ok: [testbed-manager] 2025-09-02 00:34:17.224267 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:34:17.224278 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:34:17.224289 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:34:17.224300 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:34:17.224310 | orchestrator | 2025-09-02 00:34:17.224321 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:34:17.224334 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-02 00:34:17.224346 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:34:17.224357 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:34:17.224368 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:34:17.224379 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:34:17.224389 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:34:17.224400 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:34:17.224411 | orchestrator | 2025-09-02 00:34:17.224422 | orchestrator | 2025-09-02 00:34:17.224434 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:34:17.224453 | orchestrator | Tuesday 02 September 2025 00:34:17 +0000 (0:00:02.697) 0:00:21.465 ***** 2025-09-02 00:34:17.224464 | orchestrator | =============================================================================== 2025-09-02 00:34:17.224474 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.82s 2025-09-02 00:34:17.224485 | orchestrator | Install python3-docker -------------------------------------------------- 2.70s 2025-09-02 00:34:17.224496 | orchestrator | Apply netplan configuration --------------------------------------------- 2.64s 2025-09-02 00:34:17.224507 | orchestrator | Apply netplan configuration --------------------------------------------- 1.83s 2025-09-02 00:34:17.224518 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.70s 2025-09-02 00:34:17.224529 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.70s 2025-09-02 00:34:17.224540 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.68s 2025-09-02 00:34:17.224550 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.52s 2025-09-02 00:34:17.224561 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.47s 2025-09-02 00:34:17.224572 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.85s 2025-09-02 00:34:17.224583 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.77s 2025-09-02 00:34:17.224601 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.62s 2025-09-02 00:34:17.858360 | orchestrator | + osism apply reboot -l 
testbed-nodes -e ireallymeanit=yes 2025-09-02 00:34:29.804512 | orchestrator | 2025-09-02 00:34:29 | INFO  | Task ee1030d0-2618-4265-814b-32e0eeec8153 (reboot) was prepared for execution. 2025-09-02 00:34:29.804649 | orchestrator | 2025-09-02 00:34:29 | INFO  | It takes a moment until task ee1030d0-2618-4265-814b-32e0eeec8153 (reboot) has been started and output is visible here. 2025-09-02 00:34:39.831549 | orchestrator | 2025-09-02 00:34:39.831654 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-02 00:34:39.831671 | orchestrator | 2025-09-02 00:34:39.831682 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-02 00:34:39.831694 | orchestrator | Tuesday 02 September 2025 00:34:33 +0000 (0:00:00.207) 0:00:00.207 ***** 2025-09-02 00:34:39.831705 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:34:39.831717 | orchestrator | 2025-09-02 00:34:39.831729 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-02 00:34:39.831740 | orchestrator | Tuesday 02 September 2025 00:34:33 +0000 (0:00:00.099) 0:00:00.307 ***** 2025-09-02 00:34:39.831750 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:34:39.831762 | orchestrator | 2025-09-02 00:34:39.831773 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-02 00:34:39.831784 | orchestrator | Tuesday 02 September 2025 00:34:34 +0000 (0:00:00.941) 0:00:01.249 ***** 2025-09-02 00:34:39.831795 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:34:39.831806 | orchestrator | 2025-09-02 00:34:39.831818 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-02 00:34:39.831829 | orchestrator | 2025-09-02 00:34:39.831840 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-02 00:34:39.831851 | orchestrator | Tuesday 02 September 2025 00:34:34 +0000 (0:00:00.117) 0:00:01.366 ***** 2025-09-02 00:34:39.831861 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:34:39.831872 | orchestrator | 2025-09-02 00:34:39.831883 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-02 00:34:39.831894 | orchestrator | Tuesday 02 September 2025 00:34:35 +0000 (0:00:00.101) 0:00:01.468 ***** 2025-09-02 00:34:39.831905 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:34:39.831916 | orchestrator | 2025-09-02 00:34:39.831926 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-02 00:34:39.831937 | orchestrator | Tuesday 02 September 2025 00:34:35 +0000 (0:00:00.637) 0:00:02.105 ***** 2025-09-02 00:34:39.831969 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:34:39.831980 | orchestrator | 2025-09-02 00:34:39.831991 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-02 00:34:39.832002 | orchestrator | 2025-09-02 00:34:39.832013 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-02 00:34:39.832023 | orchestrator | Tuesday 02 September 2025 00:34:35 +0000 (0:00:00.116) 0:00:02.222 ***** 2025-09-02 00:34:39.832034 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:34:39.832045 | orchestrator | 2025-09-02 00:34:39.832056 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 
2025-09-02 00:34:39.832066 | orchestrator | Tuesday 02 September 2025 00:34:36 +0000 (0:00:00.199) 0:00:02.421 ***** 2025-09-02 00:34:39.832077 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:34:39.832088 | orchestrator | 2025-09-02 00:34:39.832099 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-02 00:34:39.832110 | orchestrator | Tuesday 02 September 2025 00:34:36 +0000 (0:00:00.687) 0:00:03.109 ***** 2025-09-02 00:34:39.832120 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:34:39.832131 | orchestrator | 2025-09-02 00:34:39.832142 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-02 00:34:39.832153 | orchestrator | 2025-09-02 00:34:39.832164 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-02 00:34:39.832221 | orchestrator | Tuesday 02 September 2025 00:34:36 +0000 (0:00:00.121) 0:00:03.230 ***** 2025-09-02 00:34:39.832235 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:34:39.832246 | orchestrator | 2025-09-02 00:34:39.832257 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-02 00:34:39.832267 | orchestrator | Tuesday 02 September 2025 00:34:36 +0000 (0:00:00.111) 0:00:03.342 ***** 2025-09-02 00:34:39.832278 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:34:39.832289 | orchestrator | 2025-09-02 00:34:39.832300 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-02 00:34:39.832310 | orchestrator | Tuesday 02 September 2025 00:34:37 +0000 (0:00:00.653) 0:00:03.996 ***** 2025-09-02 00:34:39.832321 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:34:39.832332 | orchestrator | 2025-09-02 00:34:39.832342 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-02 00:34:39.832353 | orchestrator | 2025-09-02 00:34:39.832364 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-02 00:34:39.832375 | orchestrator | Tuesday 02 September 2025 00:34:37 +0000 (0:00:00.125) 0:00:04.121 ***** 2025-09-02 00:34:39.832385 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:34:39.832396 | orchestrator | 2025-09-02 00:34:39.832406 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-02 00:34:39.832417 | orchestrator | Tuesday 02 September 2025 00:34:37 +0000 (0:00:00.104) 0:00:04.226 ***** 2025-09-02 00:34:39.832427 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:34:39.832438 | orchestrator | 2025-09-02 00:34:39.832449 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-02 00:34:39.832460 | orchestrator | Tuesday 02 September 2025 00:34:38 +0000 (0:00:00.721) 0:00:04.948 ***** 2025-09-02 00:34:39.832470 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:34:39.832481 | orchestrator | 2025-09-02 00:34:39.832491 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-02 00:34:39.832502 | orchestrator | 2025-09-02 00:34:39.832513 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-02 00:34:39.832524 | orchestrator | Tuesday 02 September 2025 00:34:38 +0000 (0:00:00.126) 0:00:05.075 ***** 2025-09-02 00:34:39.832534 | orchestrator | skipping: [testbed-node-5] 
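Each per-node "Reboot systems" play above opens with the "Exit playbook, if user did not mean to reboot systems" guard, which is skipped here because the job passes -e ireallymeanit=yes on the command line. A minimal shell sketch of such a confirmation gate (only the variable name comes from the log; the check itself is illustrative):

  # Refuse to continue unless the caller explicitly confirmed the reboot
  if [ "${ireallymeanit:-no}" != "yes" ]; then
      echo "Refusing to reboot: re-run with -e ireallymeanit=yes to confirm" >&2
      exit 1
  fi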
2025-09-02 00:34:39.832545 | orchestrator | 2025-09-02 00:34:39.832555 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-02 00:34:39.832566 | orchestrator | Tuesday 02 September 2025 00:34:38 +0000 (0:00:00.091) 0:00:05.166 ***** 2025-09-02 00:34:39.832585 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:34:39.832596 | orchestrator | 2025-09-02 00:34:39.832607 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-02 00:34:39.832618 | orchestrator | Tuesday 02 September 2025 00:34:39 +0000 (0:00:00.679) 0:00:05.846 ***** 2025-09-02 00:34:39.832644 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:34:39.832656 | orchestrator | 2025-09-02 00:34:39.832667 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:34:39.832679 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:34:39.832691 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:34:39.832702 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:34:39.832713 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:34:39.832724 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:34:39.832735 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:34:39.832746 | orchestrator | 2025-09-02 00:34:39.832758 | orchestrator | 2025-09-02 00:34:39.832769 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:34:39.832779 | orchestrator | Tuesday 02 September 2025 00:34:39 +0000 (0:00:00.042) 0:00:05.888 ***** 2025-09-02 00:34:39.832791 | orchestrator | =============================================================================== 2025-09-02 00:34:39.832806 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.32s 2025-09-02 00:34:39.832817 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.71s 2025-09-02 00:34:39.832845 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.65s 2025-09-02 00:34:40.118382 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-09-02 00:34:52.161131 | orchestrator | 2025-09-02 00:34:52 | INFO  | Task 5873d673-f648-42dd-9bc0-5544708e59c5 (wait-for-connection) was prepared for execution. 2025-09-02 00:34:52.161302 | orchestrator | 2025-09-02 00:34:52 | INFO  | It takes a moment until task 5873d673-f648-42dd-9bc0-5544708e59c5 (wait-for-connection) has been started and output is visible here. 
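Only the "do not wait for the reboot to complete" variant ran in the reboot play; the nodes are rebooted asynchronously and the separate wait-for-connection task started right above verifies that they come back. A rough shell equivalent of the asynchronous reboot, assuming SSH access to the inventory node names (a sketch, not the playbook's actual mechanism):

  # Fire a reboot on every node without blocking until it returns
  for node in testbed-node-{0..5}; do
      ssh -o BatchMode=yes "$node" 'sudo systemctl reboot' &
  done
  wait   # waits only for the ssh commands, not for the nodes to come back up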
2025-09-02 00:35:08.184384 | orchestrator | 2025-09-02 00:35:08.184486 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-02 00:35:08.184502 | orchestrator | 2025-09-02 00:35:08.184515 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-02 00:35:08.184527 | orchestrator | Tuesday 02 September 2025 00:34:56 +0000 (0:00:00.270) 0:00:00.270 ***** 2025-09-02 00:35:08.184538 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:35:08.184549 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:35:08.184560 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:35:08.184571 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:35:08.184582 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:35:08.184592 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:35:08.184603 | orchestrator | 2025-09-02 00:35:08.184614 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:35:08.184625 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:35:08.184637 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:35:08.184671 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:35:08.184683 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:35:08.184694 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:35:08.184705 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:35:08.184716 | orchestrator | 2025-09-02 00:35:08.184726 | orchestrator | 2025-09-02 00:35:08.184737 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:35:08.184748 | orchestrator | Tuesday 02 September 2025 00:35:07 +0000 (0:00:11.546) 0:00:11.817 ***** 2025-09-02 00:35:08.184758 | orchestrator | =============================================================================== 2025-09-02 00:35:08.184769 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.55s 2025-09-02 00:35:08.494476 | orchestrator | + osism apply hddtemp 2025-09-02 00:35:20.613472 | orchestrator | 2025-09-02 00:35:20 | INFO  | Task 43ee3074-47d9-4182-b136-2e83e2eae83c (hddtemp) was prepared for execution. 2025-09-02 00:35:20.613591 | orchestrator | 2025-09-02 00:35:20 | INFO  | It takes a moment until task 43ee3074-47d9-4182-b136-2e83e2eae83c (hddtemp) has been started and output is visible here. 
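The wait-for-connection play above simply polls each rebooted node until its SSH connection is usable again (about 11.5 s in this run). A minimal shell sketch of the same polling idea, with an illustrative 5-second retry interval:

  # Poll each node until SSH answers again after the reboot
  for node in testbed-node-{0..5}; do
      until ssh -o BatchMode=yes -o ConnectTimeout=5 "$node" true 2>/dev/null; do
          sleep 5
      done
      echo "$node is reachable again"
  done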
2025-09-02 00:35:49.253526 | orchestrator | 2025-09-02 00:35:49.255022 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-02 00:35:49.255066 | orchestrator | 2025-09-02 00:35:49.255074 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-02 00:35:49.255140 | orchestrator | Tuesday 02 September 2025 00:35:24 +0000 (0:00:00.274) 0:00:00.274 ***** 2025-09-02 00:35:49.255148 | orchestrator | ok: [testbed-manager] 2025-09-02 00:35:49.255157 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:35:49.255164 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:35:49.255171 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:35:49.255178 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:35:49.255184 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:35:49.255191 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:35:49.255198 | orchestrator | 2025-09-02 00:35:49.255205 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-02 00:35:49.255212 | orchestrator | Tuesday 02 September 2025 00:35:25 +0000 (0:00:00.723) 0:00:00.998 ***** 2025-09-02 00:35:49.255222 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:35:49.255231 | orchestrator | 2025-09-02 00:35:49.255238 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-02 00:35:49.255246 | orchestrator | Tuesday 02 September 2025 00:35:26 +0000 (0:00:01.223) 0:00:02.222 ***** 2025-09-02 00:35:49.255257 | orchestrator | ok: [testbed-manager] 2025-09-02 00:35:49.255269 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:35:49.255280 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:35:49.255289 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:35:49.255295 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:35:49.255305 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:35:49.255316 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:35:49.255327 | orchestrator | 2025-09-02 00:35:49.255336 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-02 00:35:49.255346 | orchestrator | Tuesday 02 September 2025 00:35:28 +0000 (0:00:01.973) 0:00:04.195 ***** 2025-09-02 00:35:49.255357 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:35:49.255369 | orchestrator | changed: [testbed-manager] 2025-09-02 00:35:49.255380 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:35:49.255421 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:35:49.255429 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:35:49.255435 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:35:49.255441 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:35:49.255447 | orchestrator | 2025-09-02 00:35:49.255456 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-09-02 00:35:49.255466 | orchestrator | Tuesday 02 September 2025 00:35:29 +0000 (0:00:01.260) 0:00:05.456 ***** 2025-09-02 00:35:49.255478 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:35:49.255489 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:35:49.255495 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:35:49.255501 | orchestrator | ok: [testbed-node-3] 2025-09-02 
00:35:49.255507 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:35:49.255516 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:35:49.255527 | orchestrator | ok: [testbed-manager] 2025-09-02 00:35:49.255538 | orchestrator | 2025-09-02 00:35:49.255549 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-02 00:35:49.255560 | orchestrator | Tuesday 02 September 2025 00:35:31 +0000 (0:00:01.120) 0:00:06.577 ***** 2025-09-02 00:35:49.255571 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:35:49.255582 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:35:49.255591 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:35:49.255597 | orchestrator | changed: [testbed-manager] 2025-09-02 00:35:49.255603 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:35:49.255611 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:35:49.255622 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:35:49.255633 | orchestrator | 2025-09-02 00:35:49.255644 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-02 00:35:49.255654 | orchestrator | Tuesday 02 September 2025 00:35:31 +0000 (0:00:00.874) 0:00:07.452 ***** 2025-09-02 00:35:49.255665 | orchestrator | changed: [testbed-manager] 2025-09-02 00:35:49.255676 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:35:49.255687 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:35:49.255698 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:35:49.255708 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:35:49.255722 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:35:49.255736 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:35:49.255750 | orchestrator | 2025-09-02 00:35:49.255764 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-02 00:35:49.255775 | orchestrator | Tuesday 02 September 2025 00:35:44 +0000 (0:00:12.845) 0:00:20.297 ***** 2025-09-02 00:35:49.255786 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:35:49.255797 | orchestrator | 2025-09-02 00:35:49.255808 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-02 00:35:49.255819 | orchestrator | Tuesday 02 September 2025 00:35:46 +0000 (0:00:01.425) 0:00:21.723 ***** 2025-09-02 00:35:49.255830 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:35:49.255841 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:35:49.255852 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:35:49.255862 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:35:49.255873 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:35:49.255883 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:35:49.255894 | orchestrator | changed: [testbed-manager] 2025-09-02 00:35:49.255905 | orchestrator | 2025-09-02 00:35:49.255916 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:35:49.255928 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:35:49.256002 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-02 00:35:49.256031 | 
orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-02 00:35:49.256043 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-02 00:35:49.256053 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-02 00:35:49.256062 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-02 00:35:49.256074 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-02 00:35:49.256106 | orchestrator | 2025-09-02 00:35:49.256113 | orchestrator | 2025-09-02 00:35:49.256120 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:35:49.256126 | orchestrator | Tuesday 02 September 2025 00:35:48 +0000 (0:00:02.673) 0:00:24.396 ***** 2025-09-02 00:35:49.256132 | orchestrator | =============================================================================== 2025-09-02 00:35:49.256138 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.85s 2025-09-02 00:35:49.256144 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.67s 2025-09-02 00:35:49.256150 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.97s 2025-09-02 00:35:49.256156 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.43s 2025-09-02 00:35:49.256162 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.26s 2025-09-02 00:35:49.256168 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.22s 2025-09-02 00:35:49.256174 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.12s 2025-09-02 00:35:49.256180 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.87s 2025-09-02 00:35:49.256187 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.72s 2025-09-02 00:35:49.558585 | orchestrator | ++ semver latest 7.1.1 2025-09-02 00:35:49.614306 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-02 00:35:49.614402 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-02 00:35:49.614419 | orchestrator | + sudo systemctl restart manager.service 2025-09-02 00:36:29.638748 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-02 00:36:29.638861 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-02 00:36:29.638877 | orchestrator | + local max_attempts=60 2025-09-02 00:36:29.638889 | orchestrator | + local name=ceph-ansible 2025-09-02 00:36:29.638900 | orchestrator | + local attempt_num=1 2025-09-02 00:36:29.638912 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-02 00:36:29.679720 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-02 00:36:29.679777 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-02 00:36:29.679793 | orchestrator | + sleep 5 2025-09-02 00:36:34.683579 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-02 00:36:34.716671 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-02 00:36:34.716710 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-02 00:36:34.716723 | orchestrator | + sleep 5 2025-09-02 
00:36:39.720095 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-02 00:36:39.754773 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-02 00:36:39.754846 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-02 00:36:39.754860 | orchestrator | + sleep 5 2025-09-02 00:36:44.759051 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-02 00:36:44.798220 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-02 00:36:44.798263 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-02 00:36:44.798274 | orchestrator | + sleep 5 2025-09-02 00:36:49.802222 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-02 00:36:49.836628 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-02 00:36:49.836764 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-02 00:36:49.836781 | orchestrator | + sleep 5 2025-09-02 00:36:54.841531 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-02 00:36:54.887924 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-02 00:36:54.887990 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-02 00:36:54.888030 | orchestrator | + sleep 5 2025-09-02 00:36:59.893499 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-02 00:36:59.934501 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-02 00:36:59.934576 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-02 00:36:59.934589 | orchestrator | + sleep 5 2025-09-02 00:37:04.939213 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-02 00:37:04.984501 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-02 00:37:04.984570 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-02 00:37:04.984584 | orchestrator | + sleep 5 2025-09-02 00:37:09.990687 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-02 00:37:10.033108 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-02 00:37:10.033188 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-02 00:37:10.033203 | orchestrator | + sleep 5 2025-09-02 00:37:15.036695 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-02 00:37:15.075653 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-02 00:37:15.075724 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-02 00:37:15.075738 | orchestrator | + sleep 5 2025-09-02 00:37:20.080403 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-02 00:37:20.112031 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-02 00:37:20.112139 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-02 00:37:20.112155 | orchestrator | + sleep 5 2025-09-02 00:37:25.116430 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-02 00:37:25.153308 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-02 00:37:25.153374 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-02 00:37:25.153388 | orchestrator | + sleep 5 2025-09-02 00:37:30.159919 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-02 00:37:30.201864 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-02 00:37:30.201937 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-09-02 00:37:30.201952 | orchestrator | + sleep 5 2025-09-02 00:37:35.206195 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-02 00:37:35.245024 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-02 00:37:35.245073 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-02 00:37:35.245082 | orchestrator | + local max_attempts=60 2025-09-02 00:37:35.245089 | orchestrator | + local name=kolla-ansible 2025-09-02 00:37:35.245096 | orchestrator | + local attempt_num=1 2025-09-02 00:37:35.245979 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-02 00:37:35.289522 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-02 00:37:35.289600 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-02 00:37:35.289612 | orchestrator | + local max_attempts=60 2025-09-02 00:37:35.289620 | orchestrator | + local name=osism-ansible 2025-09-02 00:37:35.289627 | orchestrator | + local attempt_num=1 2025-09-02 00:37:35.290283 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-02 00:37:35.328118 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-02 00:37:35.328177 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-02 00:37:35.328186 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-02 00:37:35.496258 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-02 00:37:35.818650 | orchestrator | ARA in osism-ansible already disabled. 2025-09-02 00:37:35.967720 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-02 00:37:35.968566 | orchestrator | + osism apply gather-facts 2025-09-02 00:37:48.040702 | orchestrator | 2025-09-02 00:37:48 | INFO  | Task 7dc7cf69-54f3-4131-99e7-1af94d1d0026 (gather-facts) was prepared for execution. 2025-09-02 00:37:48.040800 | orchestrator | 2025-09-02 00:37:48 | INFO  | It takes a moment until task 7dc7cf69-54f3-4131-99e7-1af94d1d0026 (gather-facts) has been started and output is visible here. 
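The polling loop traced above is a small shell helper on the manager. Its definition is not part of the log, so the following is a minimal reconstruction from the visible xtrace; the behaviour when the attempt limit is reached is an assumption, since the run above always sees the container turn healthy before that branch is hit.

# reconstructed from the xtrace above; the timeout branch is assumed, not shown in the log
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "${name}")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}

# called as in the log, once per manager service container
wait_for_container_healthy 60 ceph-ansible
wait_for_container_healthy 60 kolla-ansible
wait_for_container_healthy 60 osism-ansible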
2025-09-02 00:38:01.223868 | orchestrator | 2025-09-02 00:38:01.224027 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-02 00:38:01.224045 | orchestrator | 2025-09-02 00:38:01.224057 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-02 00:38:01.224069 | orchestrator | Tuesday 02 September 2025 00:37:52 +0000 (0:00:00.221) 0:00:00.221 ***** 2025-09-02 00:38:01.224080 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:38:01.224092 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:38:01.224103 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:38:01.224113 | orchestrator | ok: [testbed-manager] 2025-09-02 00:38:01.224124 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:38:01.224134 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:38:01.224145 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:38:01.224156 | orchestrator | 2025-09-02 00:38:01.224166 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-02 00:38:01.224177 | orchestrator | 2025-09-02 00:38:01.224188 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-02 00:38:01.224199 | orchestrator | Tuesday 02 September 2025 00:38:00 +0000 (0:00:08.204) 0:00:08.426 ***** 2025-09-02 00:38:01.224210 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:38:01.224222 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:38:01.224232 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:38:01.224243 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:38:01.224253 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:38:01.224264 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:38:01.224275 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:38:01.224286 | orchestrator | 2025-09-02 00:38:01.224297 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:38:01.224308 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-02 00:38:01.224320 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-02 00:38:01.224331 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-02 00:38:01.224342 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-02 00:38:01.224352 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-02 00:38:01.224363 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-02 00:38:01.224374 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-02 00:38:01.224385 | orchestrator | 2025-09-02 00:38:01.224396 | orchestrator | 2025-09-02 00:38:01.224407 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:38:01.224418 | orchestrator | Tuesday 02 September 2025 00:38:00 +0000 (0:00:00.531) 0:00:08.958 ***** 2025-09-02 00:38:01.224430 | orchestrator | =============================================================================== 2025-09-02 00:38:01.224443 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.20s 2025-09-02 
00:38:01.224456 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2025-09-02 00:38:01.560604 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-02 00:38:01.579924 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-02 00:38:01.601743 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-02 00:38:01.620590 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-02 00:38:01.645404 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-02 00:38:01.665677 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-02 00:38:01.681620 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-02 00:38:01.695195 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-02 00:38:01.708893 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-02 00:38:01.727494 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-02 00:38:01.741965 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-02 00:38:01.754129 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-02 00:38:01.765419 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-02 00:38:01.775548 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-02 00:38:01.790283 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-02 00:38:01.810622 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-02 00:38:01.831146 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-02 00:38:01.840748 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-02 00:38:01.850834 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-02 00:38:01.862140 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-02 00:38:01.874816 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-02 00:38:02.009513 | orchestrator | ok: Runtime: 0:24:41.442624 2025-09-02 00:38:02.112529 | 2025-09-02 00:38:02.112664 | TASK [Deploy services] 2025-09-02 00:38:02.645421 | orchestrator | skipping: Conditional result was False 2025-09-02 00:38:02.663081 | 2025-09-02 00:38:02.663308 | TASK [Deploy in a nutshell] 2025-09-02 00:38:03.441819 | orchestrator | + set -e 
2025-09-02 00:38:03.442132 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-02 00:38:03.442167 | orchestrator | ++ export INTERACTIVE=false 2025-09-02 00:38:03.442190 | orchestrator | ++ INTERACTIVE=false 2025-09-02 00:38:03.442204 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-02 00:38:03.442218 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-02 00:38:03.442232 | orchestrator | + source /opt/manager-vars.sh 2025-09-02 00:38:03.442293 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-02 00:38:03.442324 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-02 00:38:03.442339 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-02 00:38:03.442355 | orchestrator | ++ CEPH_VERSION=reef 2025-09-02 00:38:03.442368 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-02 00:38:03.442387 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-02 00:38:03.442402 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-02 00:38:03.442435 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-02 00:38:03.442455 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-02 00:38:03.442479 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-02 00:38:03.442501 | orchestrator | ++ export ARA=false 2025-09-02 00:38:03.442521 | orchestrator | ++ ARA=false 2025-09-02 00:38:03.442535 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-02 00:38:03.442548 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-02 00:38:03.442559 | orchestrator | ++ export TEMPEST=true 2025-09-02 00:38:03.442570 | orchestrator | ++ TEMPEST=true 2025-09-02 00:38:03.442581 | orchestrator | ++ export IS_ZUUL=true 2025-09-02 00:38:03.442592 | orchestrator | ++ IS_ZUUL=true 2025-09-02 00:38:03.442604 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.185 2025-09-02 00:38:03.442616 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.185 2025-09-02 00:38:03.442627 | orchestrator | ++ export EXTERNAL_API=false 2025-09-02 00:38:03.442648 | orchestrator | 2025-09-02 00:38:03.442666 | orchestrator | # PULL IMAGES 2025-09-02 00:38:03.442682 | orchestrator | 2025-09-02 00:38:03.442698 | orchestrator | ++ EXTERNAL_API=false 2025-09-02 00:38:03.442714 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-02 00:38:03.442732 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-02 00:38:03.442787 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-02 00:38:03.442806 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-02 00:38:03.442825 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-02 00:38:03.442855 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-02 00:38:03.442877 | orchestrator | + echo 2025-09-02 00:38:03.442897 | orchestrator | + echo '# PULL IMAGES' 2025-09-02 00:38:03.442912 | orchestrator | + echo 2025-09-02 00:38:03.442930 | orchestrator | ++ semver latest 7.0.0 2025-09-02 00:38:03.490664 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-02 00:38:03.490707 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-02 00:38:03.490721 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-02 00:38:05.366258 | orchestrator | 2025-09-02 00:38:05 | INFO  | Trying to run play pull-images in environment custom 2025-09-02 00:38:15.507579 | orchestrator | 2025-09-02 00:38:15 | INFO  | Task 885f3b0d-89fb-46cd-b4b2-d26c5429c6c8 (pull-images) was prepared for execution. 2025-09-02 00:38:15.512448 | orchestrator | 2025-09-02 00:38:15 | INFO  | Task 885f3b0d-89fb-46cd-b4b2-d26c5429c6c8 is running in background. No more output. Check ARA for logs. 
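The two bracket tests traced just before the pull-images call form a simple version gate. The semver helper itself is not shown in the log; the sketch below assumes it prints -1, 0, or 1 for an older, equal, or newer MANAGER_VERSION, with the string "latest" always accepted as new enough.

# sketch of the version gate visible in the xtrace above (the semver helper is an assumption)
if [[ "$(semver "${MANAGER_VERSION}" 7.0.0)" -ge 0 ]] || [[ "${MANAGER_VERSION}" == "latest" ]]; then
    osism apply --no-wait -r 2 -e custom pull-images
fi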
2025-09-02 00:38:17.831655 | orchestrator | 2025-09-02 00:38:17 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-02 00:38:28.011073 | orchestrator | 2025-09-02 00:38:28 | INFO  | Task f2b29ce1-1230-4ef2-a553-403c2bba5220 (wipe-partitions) was prepared for execution. 2025-09-02 00:38:28.011197 | orchestrator | 2025-09-02 00:38:28 | INFO  | It takes a moment until task f2b29ce1-1230-4ef2-a553-403c2bba5220 (wipe-partitions) has been started and output is visible here. 2025-09-02 00:38:41.457148 | orchestrator | 2025-09-02 00:38:41.457262 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-02 00:38:41.457279 | orchestrator | 2025-09-02 00:38:41.457291 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-02 00:38:41.458692 | orchestrator | Tuesday 02 September 2025 00:38:32 +0000 (0:00:00.142) 0:00:00.142 ***** 2025-09-02 00:38:41.458731 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:38:41.458771 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:38:41.458783 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:38:41.458794 | orchestrator | 2025-09-02 00:38:41.458806 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-02 00:38:41.458848 | orchestrator | Tuesday 02 September 2025 00:38:32 +0000 (0:00:00.587) 0:00:00.730 ***** 2025-09-02 00:38:41.458860 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:38:41.458871 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:38:41.458934 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:38:41.458947 | orchestrator | 2025-09-02 00:38:41.458958 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-02 00:38:41.458969 | orchestrator | Tuesday 02 September 2025 00:38:33 +0000 (0:00:00.260) 0:00:00.991 ***** 2025-09-02 00:38:41.458980 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:38:41.459004 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:38:41.459015 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:38:41.459026 | orchestrator | 2025-09-02 00:38:41.459037 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-02 00:38:41.459048 | orchestrator | Tuesday 02 September 2025 00:38:33 +0000 (0:00:00.717) 0:00:01.708 ***** 2025-09-02 00:38:41.459059 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:38:41.459070 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:38:41.459081 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:38:41.459092 | orchestrator | 2025-09-02 00:38:41.459103 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-02 00:38:41.459114 | orchestrator | Tuesday 02 September 2025 00:38:34 +0000 (0:00:00.246) 0:00:01.955 ***** 2025-09-02 00:38:41.459125 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-02 00:38:41.459153 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-02 00:38:41.459165 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-02 00:38:41.459176 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-02 00:38:41.459186 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-02 00:38:41.459197 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-02 00:38:41.459208 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 
2025-09-02 00:38:41.459219 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-02 00:38:41.459230 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-02 00:38:41.459241 | orchestrator | 2025-09-02 00:38:41.459251 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-02 00:38:41.459263 | orchestrator | Tuesday 02 September 2025 00:38:36 +0000 (0:00:01.999) 0:00:03.955 ***** 2025-09-02 00:38:41.459274 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-02 00:38:41.459285 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-02 00:38:41.459296 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-02 00:38:41.459307 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-02 00:38:41.459318 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-02 00:38:41.459328 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-09-02 00:38:41.459339 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-02 00:38:41.459350 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-02 00:38:41.459361 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-02 00:38:41.459371 | orchestrator | 2025-09-02 00:38:41.459382 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-02 00:38:41.459393 | orchestrator | Tuesday 02 September 2025 00:38:37 +0000 (0:00:01.400) 0:00:05.356 ***** 2025-09-02 00:38:41.459403 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-02 00:38:41.459414 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-02 00:38:41.459425 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-02 00:38:41.459436 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-02 00:38:41.459446 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-02 00:38:41.459464 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-02 00:38:41.459475 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-02 00:38:41.459495 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-02 00:38:41.459506 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-02 00:38:41.459517 | orchestrator | 2025-09-02 00:38:41.459527 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-02 00:38:41.459538 | orchestrator | Tuesday 02 September 2025 00:38:39 +0000 (0:00:02.555) 0:00:07.911 ***** 2025-09-02 00:38:41.459549 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:38:41.459560 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:38:41.459571 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:38:41.459582 | orchestrator | 2025-09-02 00:38:41.459592 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-09-02 00:38:41.459603 | orchestrator | Tuesday 02 September 2025 00:38:40 +0000 (0:00:00.573) 0:00:08.484 ***** 2025-09-02 00:38:41.459614 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:38:41.459625 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:38:41.459636 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:38:41.459646 | orchestrator | 2025-09-02 00:38:41.459657 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:38:41.459670 | orchestrator | testbed-node-3 : ok=7  changed=5  
unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:38:41.459682 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:38:41.459715 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:38:41.459726 | orchestrator | 2025-09-02 00:38:41.459738 | orchestrator | 2025-09-02 00:38:41.459749 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:38:41.459760 | orchestrator | Tuesday 02 September 2025 00:38:41 +0000 (0:00:00.591) 0:00:09.075 ***** 2025-09-02 00:38:41.459770 | orchestrator | =============================================================================== 2025-09-02 00:38:41.459781 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.56s 2025-09-02 00:38:41.459792 | orchestrator | Check device availability ----------------------------------------------- 2.00s 2025-09-02 00:38:41.459802 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.40s 2025-09-02 00:38:41.459813 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.72s 2025-09-02 00:38:41.459823 | orchestrator | Request device events from the kernel ----------------------------------- 0.59s 2025-09-02 00:38:41.459834 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2025-09-02 00:38:41.459845 | orchestrator | Reload udev rules ------------------------------------------------------- 0.57s 2025-09-02 00:38:41.459856 | orchestrator | Remove all rook related logical devices --------------------------------- 0.26s 2025-09-02 00:38:41.459867 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2025-09-02 00:38:53.791787 | orchestrator | 2025-09-02 00:38:53 | INFO  | Task 9aec0a68-b78f-4488-ace3-55cf68489256 (facts) was prepared for execution. 2025-09-02 00:38:53.791893 | orchestrator | 2025-09-02 00:38:53 | INFO  | It takes a moment until task 9aec0a68-b78f-4488-ace3-55cf68489256 (facts) has been started and output is visible here. 
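Per storage node, the wipe-partitions play above boils down to clearing signatures and the first megabytes of each OSD candidate disk and then refreshing udev. A rough manual equivalent, assuming the same /dev/sdb-/dev/sdd layout as in this run (destructive, illustration only):

# illustrative manual equivalent of the wipe steps; destroys data on the listed disks
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    sudo wipefs -a "${dev}"                          # drop filesystem/LVM/RAID signatures
    sudo dd if=/dev/zero of="${dev}" bs=1M count=32  # overwrite the first 32M with zeros
done
sudo udevadm control --reload-rules                  # reload udev rules
sudo udevadm trigger                                 # request device events from the kernel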
2025-09-02 00:39:05.975571 | orchestrator | 2025-09-02 00:39:05.975711 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-02 00:39:05.976501 | orchestrator | 2025-09-02 00:39:05.976523 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-02 00:39:05.976536 | orchestrator | Tuesday 02 September 2025 00:38:57 +0000 (0:00:00.287) 0:00:00.287 ***** 2025-09-02 00:39:05.976548 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:39:05.976561 | orchestrator | ok: [testbed-manager] 2025-09-02 00:39:05.976572 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:39:05.976612 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:39:05.976624 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:39:05.976635 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:39:05.976645 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:39:05.976656 | orchestrator | 2025-09-02 00:39:05.976670 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-02 00:39:05.976681 | orchestrator | Tuesday 02 September 2025 00:38:58 +0000 (0:00:01.120) 0:00:01.407 ***** 2025-09-02 00:39:05.976692 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:39:05.976703 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:39:05.976714 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:39:05.976725 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:39:05.976735 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:05.976746 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:05.976764 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:05.976782 | orchestrator | 2025-09-02 00:39:05.976801 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-02 00:39:05.976819 | orchestrator | 2025-09-02 00:39:05.976838 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-02 00:39:05.976853 | orchestrator | Tuesday 02 September 2025 00:39:00 +0000 (0:00:01.338) 0:00:02.745 ***** 2025-09-02 00:39:05.976864 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:39:05.976875 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:39:05.976886 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:39:05.976897 | orchestrator | ok: [testbed-manager] 2025-09-02 00:39:05.976908 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:39:05.976918 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:39:05.976929 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:39:05.976939 | orchestrator | 2025-09-02 00:39:05.976950 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-02 00:39:05.976961 | orchestrator | 2025-09-02 00:39:05.976972 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-02 00:39:05.976998 | orchestrator | Tuesday 02 September 2025 00:39:04 +0000 (0:00:04.565) 0:00:07.311 ***** 2025-09-02 00:39:05.977010 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:39:05.977021 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:39:05.977032 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:39:05.977042 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:39:05.977084 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:05.977095 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:05.977105 | orchestrator | skipping: 
[testbed-node-5] 2025-09-02 00:39:05.977116 | orchestrator | 2025-09-02 00:39:05.977127 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:39:05.977138 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:39:05.977151 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:39:05.977162 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:39:05.977173 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:39:05.977184 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:39:05.977196 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:39:05.977206 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:39:05.977217 | orchestrator | 2025-09-02 00:39:05.977239 | orchestrator | 2025-09-02 00:39:05.977251 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:39:05.977262 | orchestrator | Tuesday 02 September 2025 00:39:05 +0000 (0:00:00.721) 0:00:08.032 ***** 2025-09-02 00:39:05.977272 | orchestrator | =============================================================================== 2025-09-02 00:39:05.977283 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.57s 2025-09-02 00:39:05.977294 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.34s 2025-09-02 00:39:05.977305 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.12s 2025-09-02 00:39:05.977316 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.72s 2025-09-02 00:39:08.335583 | orchestrator | 2025-09-02 00:39:08 | INFO  | Task 85572251-d598-487a-bcba-da0226699d86 (ceph-configure-lvm-volumes) was prepared for execution. 2025-09-02 00:39:08.335690 | orchestrator | 2025-09-02 00:39:08 | INFO  | It takes a moment until task 85572251-d598-487a-bcba-da0226699d86 (ceph-configure-lvm-volumes) has been started and output is visible here. 
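The ceph-configure-lvm-volumes play that starts here first builds, per node, an inventory of block devices plus their persistent /dev/disk/by-id links before picking OSD devices. Roughly what the repeated "Add known links ..." tasks collect, shown as manual commands (illustrative only):

# what the device enumeration below corresponds to on a node (illustrative)
lsblk -dno NAME             # top-level devices: loop0..loop7, sda..sdd, sr0
ls -l /dev/disk/by-id/      # persistent links such as scsi-0QEMU_QEMU_HARDDISK_<serial> -> ../../sdb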
2025-09-02 00:39:20.329284 | orchestrator | 2025-09-02 00:39:20.329399 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-02 00:39:20.329415 | orchestrator | 2025-09-02 00:39:20.329427 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-02 00:39:20.329441 | orchestrator | Tuesday 02 September 2025 00:39:12 +0000 (0:00:00.327) 0:00:00.327 ***** 2025-09-02 00:39:20.329453 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-02 00:39:20.329464 | orchestrator | 2025-09-02 00:39:20.329475 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-02 00:39:20.329486 | orchestrator | Tuesday 02 September 2025 00:39:12 +0000 (0:00:00.256) 0:00:00.583 ***** 2025-09-02 00:39:20.329498 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:39:20.329510 | orchestrator | 2025-09-02 00:39:20.329521 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:20.329532 | orchestrator | Tuesday 02 September 2025 00:39:13 +0000 (0:00:00.231) 0:00:00.815 ***** 2025-09-02 00:39:20.329543 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-02 00:39:20.329555 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-02 00:39:20.329566 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-02 00:39:20.329577 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-02 00:39:20.329588 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-02 00:39:20.329598 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-02 00:39:20.329609 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-02 00:39:20.329620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-02 00:39:20.329631 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-02 00:39:20.329642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-02 00:39:20.329652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-02 00:39:20.329671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-02 00:39:20.329683 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-02 00:39:20.329694 | orchestrator | 2025-09-02 00:39:20.329705 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:20.329716 | orchestrator | Tuesday 02 September 2025 00:39:13 +0000 (0:00:00.415) 0:00:01.230 ***** 2025-09-02 00:39:20.329727 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:20.329762 | orchestrator | 2025-09-02 00:39:20.329776 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:20.329789 | orchestrator | Tuesday 02 September 2025 00:39:13 +0000 (0:00:00.468) 0:00:01.698 ***** 2025-09-02 00:39:20.329801 | orchestrator | skipping: [testbed-node-3] 2025-09-02 
00:39:20.329813 | orchestrator | 2025-09-02 00:39:20.329825 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:20.329838 | orchestrator | Tuesday 02 September 2025 00:39:14 +0000 (0:00:00.197) 0:00:01.896 ***** 2025-09-02 00:39:20.329851 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:20.329864 | orchestrator | 2025-09-02 00:39:20.329875 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:20.329888 | orchestrator | Tuesday 02 September 2025 00:39:14 +0000 (0:00:00.210) 0:00:02.107 ***** 2025-09-02 00:39:20.329900 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:20.329916 | orchestrator | 2025-09-02 00:39:20.329930 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:20.329943 | orchestrator | Tuesday 02 September 2025 00:39:14 +0000 (0:00:00.210) 0:00:02.317 ***** 2025-09-02 00:39:20.329956 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:20.329968 | orchestrator | 2025-09-02 00:39:20.329981 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:20.329993 | orchestrator | Tuesday 02 September 2025 00:39:14 +0000 (0:00:00.225) 0:00:02.542 ***** 2025-09-02 00:39:20.330006 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:20.330076 | orchestrator | 2025-09-02 00:39:20.330090 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:20.330104 | orchestrator | Tuesday 02 September 2025 00:39:15 +0000 (0:00:00.190) 0:00:02.732 ***** 2025-09-02 00:39:20.330114 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:20.330125 | orchestrator | 2025-09-02 00:39:20.330155 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:20.330167 | orchestrator | Tuesday 02 September 2025 00:39:15 +0000 (0:00:00.204) 0:00:02.937 ***** 2025-09-02 00:39:20.330177 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:20.330188 | orchestrator | 2025-09-02 00:39:20.330198 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:20.330209 | orchestrator | Tuesday 02 September 2025 00:39:15 +0000 (0:00:00.218) 0:00:03.156 ***** 2025-09-02 00:39:20.330220 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b) 2025-09-02 00:39:20.330232 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b) 2025-09-02 00:39:20.330243 | orchestrator | 2025-09-02 00:39:20.330253 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:20.330264 | orchestrator | Tuesday 02 September 2025 00:39:15 +0000 (0:00:00.402) 0:00:03.558 ***** 2025-09-02 00:39:20.330293 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_73f7aaa7-092a-4c1a-a663-fd98a6f92d43) 2025-09-02 00:39:20.330305 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_73f7aaa7-092a-4c1a-a663-fd98a6f92d43) 2025-09-02 00:39:20.330316 | orchestrator | 2025-09-02 00:39:20.330326 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:20.330337 | orchestrator | Tuesday 02 September 2025 00:39:16 +0000 (0:00:00.401) 0:00:03.960 ***** 2025-09-02 
00:39:20.330348 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_533befbb-84ad-4d2f-a6fe-9bcc757d70d3) 2025-09-02 00:39:20.330358 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_533befbb-84ad-4d2f-a6fe-9bcc757d70d3) 2025-09-02 00:39:20.330369 | orchestrator | 2025-09-02 00:39:20.330380 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:20.330390 | orchestrator | Tuesday 02 September 2025 00:39:16 +0000 (0:00:00.607) 0:00:04.568 ***** 2025-09-02 00:39:20.330401 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_422500f3-63b7-48d3-a02b-7c8a68fd4498) 2025-09-02 00:39:20.330420 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_422500f3-63b7-48d3-a02b-7c8a68fd4498) 2025-09-02 00:39:20.330430 | orchestrator | 2025-09-02 00:39:20.330441 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:20.330452 | orchestrator | Tuesday 02 September 2025 00:39:17 +0000 (0:00:00.636) 0:00:05.204 ***** 2025-09-02 00:39:20.330462 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-02 00:39:20.330473 | orchestrator | 2025-09-02 00:39:20.330483 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:20.330500 | orchestrator | Tuesday 02 September 2025 00:39:18 +0000 (0:00:00.769) 0:00:05.974 ***** 2025-09-02 00:39:20.330511 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-02 00:39:20.330522 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-02 00:39:20.330532 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-02 00:39:20.330543 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-02 00:39:20.330553 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-02 00:39:20.330564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-02 00:39:20.330574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-02 00:39:20.330585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-02 00:39:20.330595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-02 00:39:20.330606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-02 00:39:20.330616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-02 00:39:20.330627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-02 00:39:20.330637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-02 00:39:20.330648 | orchestrator | 2025-09-02 00:39:20.330659 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:20.330669 | orchestrator | Tuesday 02 September 2025 00:39:18 +0000 (0:00:00.379) 0:00:06.354 ***** 2025-09-02 00:39:20.330680 | orchestrator | skipping: [testbed-node-3] 
2025-09-02 00:39:20.330690 | orchestrator | 2025-09-02 00:39:20.330701 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:20.330712 | orchestrator | Tuesday 02 September 2025 00:39:18 +0000 (0:00:00.213) 0:00:06.567 ***** 2025-09-02 00:39:20.330722 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:20.330733 | orchestrator | 2025-09-02 00:39:20.330744 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:20.330754 | orchestrator | Tuesday 02 September 2025 00:39:19 +0000 (0:00:00.211) 0:00:06.779 ***** 2025-09-02 00:39:20.330765 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:20.330775 | orchestrator | 2025-09-02 00:39:20.330786 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:20.330797 | orchestrator | Tuesday 02 September 2025 00:39:19 +0000 (0:00:00.221) 0:00:07.000 ***** 2025-09-02 00:39:20.330807 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:20.330818 | orchestrator | 2025-09-02 00:39:20.330829 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:20.330839 | orchestrator | Tuesday 02 September 2025 00:39:19 +0000 (0:00:00.207) 0:00:07.208 ***** 2025-09-02 00:39:20.330850 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:20.330861 | orchestrator | 2025-09-02 00:39:20.330878 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:20.330889 | orchestrator | Tuesday 02 September 2025 00:39:19 +0000 (0:00:00.198) 0:00:07.407 ***** 2025-09-02 00:39:20.330899 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:20.330910 | orchestrator | 2025-09-02 00:39:20.330920 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:20.330931 | orchestrator | Tuesday 02 September 2025 00:39:19 +0000 (0:00:00.203) 0:00:07.610 ***** 2025-09-02 00:39:20.330942 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:20.330952 | orchestrator | 2025-09-02 00:39:20.330963 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:20.330974 | orchestrator | Tuesday 02 September 2025 00:39:20 +0000 (0:00:00.236) 0:00:07.847 ***** 2025-09-02 00:39:20.330991 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:28.105883 | orchestrator | 2025-09-02 00:39:28.106008 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:28.106099 | orchestrator | Tuesday 02 September 2025 00:39:20 +0000 (0:00:00.183) 0:00:08.030 ***** 2025-09-02 00:39:28.106119 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-02 00:39:28.106133 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-02 00:39:28.106144 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-02 00:39:28.106155 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-02 00:39:28.106166 | orchestrator | 2025-09-02 00:39:28.106226 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:28.106239 | orchestrator | Tuesday 02 September 2025 00:39:21 +0000 (0:00:01.056) 0:00:09.087 ***** 2025-09-02 00:39:28.106250 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:28.106261 | orchestrator | 2025-09-02 00:39:28.106273 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:28.106284 | orchestrator | Tuesday 02 September 2025 00:39:21 +0000 (0:00:00.201) 0:00:09.289 ***** 2025-09-02 00:39:28.106295 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:28.106306 | orchestrator | 2025-09-02 00:39:28.106317 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:28.106328 | orchestrator | Tuesday 02 September 2025 00:39:21 +0000 (0:00:00.218) 0:00:09.507 ***** 2025-09-02 00:39:28.106339 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:28.106350 | orchestrator | 2025-09-02 00:39:28.106361 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:28.106372 | orchestrator | Tuesday 02 September 2025 00:39:22 +0000 (0:00:00.210) 0:00:09.718 ***** 2025-09-02 00:39:28.106383 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:28.106393 | orchestrator | 2025-09-02 00:39:28.106404 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-02 00:39:28.106418 | orchestrator | Tuesday 02 September 2025 00:39:22 +0000 (0:00:00.222) 0:00:09.941 ***** 2025-09-02 00:39:28.106431 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-09-02 00:39:28.106444 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-09-02 00:39:28.106456 | orchestrator | 2025-09-02 00:39:28.106468 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-02 00:39:28.106480 | orchestrator | Tuesday 02 September 2025 00:39:22 +0000 (0:00:00.193) 0:00:10.134 ***** 2025-09-02 00:39:28.106513 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:28.106527 | orchestrator | 2025-09-02 00:39:28.106539 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-02 00:39:28.106551 | orchestrator | Tuesday 02 September 2025 00:39:22 +0000 (0:00:00.129) 0:00:10.264 ***** 2025-09-02 00:39:28.106563 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:28.106576 | orchestrator | 2025-09-02 00:39:28.106589 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-02 00:39:28.106601 | orchestrator | Tuesday 02 September 2025 00:39:22 +0000 (0:00:00.136) 0:00:10.400 ***** 2025-09-02 00:39:28.106613 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:28.106650 | orchestrator | 2025-09-02 00:39:28.106663 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-02 00:39:28.106676 | orchestrator | Tuesday 02 September 2025 00:39:22 +0000 (0:00:00.155) 0:00:10.556 ***** 2025-09-02 00:39:28.106688 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:39:28.106701 | orchestrator | 2025-09-02 00:39:28.106714 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-02 00:39:28.106727 | orchestrator | Tuesday 02 September 2025 00:39:23 +0000 (0:00:00.156) 0:00:10.712 ***** 2025-09-02 00:39:28.106740 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'}}) 2025-09-02 00:39:28.106753 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '688b3bb6-a638-5f84-8470-ce7969c766cd'}}) 2025-09-02 00:39:28.106766 | orchestrator | 
2025-09-02 00:39:28.106778 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-02 00:39:28.106788 | orchestrator | Tuesday 02 September 2025 00:39:23 +0000 (0:00:00.154) 0:00:10.867 ***** 2025-09-02 00:39:28.106800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'}})  2025-09-02 00:39:28.106819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '688b3bb6-a638-5f84-8470-ce7969c766cd'}})  2025-09-02 00:39:28.106831 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:28.106841 | orchestrator | 2025-09-02 00:39:28.106852 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-02 00:39:28.106863 | orchestrator | Tuesday 02 September 2025 00:39:23 +0000 (0:00:00.144) 0:00:11.011 ***** 2025-09-02 00:39:28.106874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'}})  2025-09-02 00:39:28.106885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '688b3bb6-a638-5f84-8470-ce7969c766cd'}})  2025-09-02 00:39:28.106896 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:28.106906 | orchestrator | 2025-09-02 00:39:28.106917 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-02 00:39:28.106928 | orchestrator | Tuesday 02 September 2025 00:39:23 +0000 (0:00:00.398) 0:00:11.409 ***** 2025-09-02 00:39:28.106939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'}})  2025-09-02 00:39:28.106950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '688b3bb6-a638-5f84-8470-ce7969c766cd'}})  2025-09-02 00:39:28.106961 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:28.106972 | orchestrator | 2025-09-02 00:39:28.107002 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-02 00:39:28.107014 | orchestrator | Tuesday 02 September 2025 00:39:23 +0000 (0:00:00.179) 0:00:11.589 ***** 2025-09-02 00:39:28.107024 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:39:28.107035 | orchestrator | 2025-09-02 00:39:28.107052 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-02 00:39:28.107063 | orchestrator | Tuesday 02 September 2025 00:39:24 +0000 (0:00:00.145) 0:00:11.734 ***** 2025-09-02 00:39:28.107074 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:39:28.107085 | orchestrator | 2025-09-02 00:39:28.107096 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-02 00:39:28.107107 | orchestrator | Tuesday 02 September 2025 00:39:24 +0000 (0:00:00.138) 0:00:11.873 ***** 2025-09-02 00:39:28.107118 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:28.107129 | orchestrator | 2025-09-02 00:39:28.107140 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-02 00:39:28.107151 | orchestrator | Tuesday 02 September 2025 00:39:24 +0000 (0:00:00.141) 0:00:12.014 ***** 2025-09-02 00:39:28.107161 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:28.107172 | orchestrator | 2025-09-02 00:39:28.107209 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-09-02 00:39:28.107221 | orchestrator | Tuesday 02 September 2025 00:39:24 +0000 (0:00:00.146) 0:00:12.161 ***** 2025-09-02 00:39:28.107232 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:28.107243 | orchestrator | 2025-09-02 00:39:28.107253 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-02 00:39:28.107264 | orchestrator | Tuesday 02 September 2025 00:39:24 +0000 (0:00:00.153) 0:00:12.314 ***** 2025-09-02 00:39:28.107275 | orchestrator | ok: [testbed-node-3] => { 2025-09-02 00:39:28.107286 | orchestrator |  "ceph_osd_devices": { 2025-09-02 00:39:28.107297 | orchestrator |  "sdb": { 2025-09-02 00:39:28.107309 | orchestrator |  "osd_lvm_uuid": "13b5fa21-9dd3-5f23-9982-99f7e2a8b07c" 2025-09-02 00:39:28.107320 | orchestrator |  }, 2025-09-02 00:39:28.107331 | orchestrator |  "sdc": { 2025-09-02 00:39:28.107342 | orchestrator |  "osd_lvm_uuid": "688b3bb6-a638-5f84-8470-ce7969c766cd" 2025-09-02 00:39:28.107352 | orchestrator |  } 2025-09-02 00:39:28.107363 | orchestrator |  } 2025-09-02 00:39:28.107374 | orchestrator | } 2025-09-02 00:39:28.107385 | orchestrator | 2025-09-02 00:39:28.107396 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-02 00:39:28.107407 | orchestrator | Tuesday 02 September 2025 00:39:24 +0000 (0:00:00.148) 0:00:12.462 ***** 2025-09-02 00:39:28.107418 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:28.107428 | orchestrator | 2025-09-02 00:39:28.107439 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-02 00:39:28.107450 | orchestrator | Tuesday 02 September 2025 00:39:24 +0000 (0:00:00.132) 0:00:12.595 ***** 2025-09-02 00:39:28.107461 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:28.107471 | orchestrator | 2025-09-02 00:39:28.107482 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-02 00:39:28.107493 | orchestrator | Tuesday 02 September 2025 00:39:25 +0000 (0:00:00.130) 0:00:12.726 ***** 2025-09-02 00:39:28.107503 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:39:28.107514 | orchestrator | 2025-09-02 00:39:28.107525 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-02 00:39:28.107535 | orchestrator | Tuesday 02 September 2025 00:39:25 +0000 (0:00:00.136) 0:00:12.863 ***** 2025-09-02 00:39:28.107546 | orchestrator | changed: [testbed-node-3] => { 2025-09-02 00:39:28.107557 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-02 00:39:28.107568 | orchestrator |  "ceph_osd_devices": { 2025-09-02 00:39:28.107578 | orchestrator |  "sdb": { 2025-09-02 00:39:28.107589 | orchestrator |  "osd_lvm_uuid": "13b5fa21-9dd3-5f23-9982-99f7e2a8b07c" 2025-09-02 00:39:28.107600 | orchestrator |  }, 2025-09-02 00:39:28.107611 | orchestrator |  "sdc": { 2025-09-02 00:39:28.107622 | orchestrator |  "osd_lvm_uuid": "688b3bb6-a638-5f84-8470-ce7969c766cd" 2025-09-02 00:39:28.107632 | orchestrator |  } 2025-09-02 00:39:28.107643 | orchestrator |  }, 2025-09-02 00:39:28.107654 | orchestrator |  "lvm_volumes": [ 2025-09-02 00:39:28.107665 | orchestrator |  { 2025-09-02 00:39:28.107676 | orchestrator |  "data": "osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c", 2025-09-02 00:39:28.107686 | orchestrator |  "data_vg": "ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c" 2025-09-02 00:39:28.107697 | orchestrator |  }, 2025-09-02 
00:39:28.107708 | orchestrator |  { 2025-09-02 00:39:28.107718 | orchestrator |  "data": "osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd", 2025-09-02 00:39:28.107729 | orchestrator |  "data_vg": "ceph-688b3bb6-a638-5f84-8470-ce7969c766cd" 2025-09-02 00:39:28.107740 | orchestrator |  } 2025-09-02 00:39:28.107750 | orchestrator |  ] 2025-09-02 00:39:28.107761 | orchestrator |  } 2025-09-02 00:39:28.107772 | orchestrator | } 2025-09-02 00:39:28.107783 | orchestrator | 2025-09-02 00:39:28.107799 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-02 00:39:28.107817 | orchestrator | Tuesday 02 September 2025 00:39:25 +0000 (0:00:00.225) 0:00:13.088 ***** 2025-09-02 00:39:28.107828 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-02 00:39:28.107839 | orchestrator | 2025-09-02 00:39:28.107849 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-02 00:39:28.107860 | orchestrator | 2025-09-02 00:39:28.107871 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-02 00:39:28.107881 | orchestrator | Tuesday 02 September 2025 00:39:27 +0000 (0:00:02.223) 0:00:15.312 ***** 2025-09-02 00:39:28.107892 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-02 00:39:28.107903 | orchestrator | 2025-09-02 00:39:28.107913 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-02 00:39:28.107924 | orchestrator | Tuesday 02 September 2025 00:39:27 +0000 (0:00:00.253) 0:00:15.566 ***** 2025-09-02 00:39:28.107935 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:39:28.107945 | orchestrator | 2025-09-02 00:39:28.107956 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:28.107974 | orchestrator | Tuesday 02 September 2025 00:39:28 +0000 (0:00:00.242) 0:00:15.808 ***** 2025-09-02 00:39:36.133506 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-02 00:39:36.133614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-02 00:39:36.133629 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-02 00:39:36.133641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-02 00:39:36.133652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-02 00:39:36.133663 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-02 00:39:36.133674 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-02 00:39:36.133684 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-02 00:39:36.133695 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-02 00:39:36.133706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-02 00:39:36.133717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-02 00:39:36.133727 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-02 00:39:36.133738 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-02 00:39:36.133753 | orchestrator | 2025-09-02 00:39:36.133765 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:36.133778 | orchestrator | Tuesday 02 September 2025 00:39:28 +0000 (0:00:00.418) 0:00:16.227 ***** 2025-09-02 00:39:36.133789 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:36.133801 | orchestrator | 2025-09-02 00:39:36.133812 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:36.133823 | orchestrator | Tuesday 02 September 2025 00:39:28 +0000 (0:00:00.208) 0:00:16.436 ***** 2025-09-02 00:39:36.133834 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:36.133845 | orchestrator | 2025-09-02 00:39:36.133855 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:36.133866 | orchestrator | Tuesday 02 September 2025 00:39:28 +0000 (0:00:00.199) 0:00:16.635 ***** 2025-09-02 00:39:36.133877 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:36.133888 | orchestrator | 2025-09-02 00:39:36.133899 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:36.133910 | orchestrator | Tuesday 02 September 2025 00:39:29 +0000 (0:00:00.205) 0:00:16.841 ***** 2025-09-02 00:39:36.133920 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:36.133958 | orchestrator | 2025-09-02 00:39:36.133970 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:36.133981 | orchestrator | Tuesday 02 September 2025 00:39:29 +0000 (0:00:00.213) 0:00:17.055 ***** 2025-09-02 00:39:36.133992 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:36.134002 | orchestrator | 2025-09-02 00:39:36.134013 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:36.134082 | orchestrator | Tuesday 02 September 2025 00:39:29 +0000 (0:00:00.595) 0:00:17.650 ***** 2025-09-02 00:39:36.134095 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:36.134108 | orchestrator | 2025-09-02 00:39:36.134121 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:36.134134 | orchestrator | Tuesday 02 September 2025 00:39:30 +0000 (0:00:00.213) 0:00:17.864 ***** 2025-09-02 00:39:36.134162 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:36.134177 | orchestrator | 2025-09-02 00:39:36.134190 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:36.134203 | orchestrator | Tuesday 02 September 2025 00:39:30 +0000 (0:00:00.205) 0:00:18.069 ***** 2025-09-02 00:39:36.134216 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:36.134256 | orchestrator | 2025-09-02 00:39:36.134270 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:36.134283 | orchestrator | Tuesday 02 September 2025 00:39:30 +0000 (0:00:00.215) 0:00:18.285 ***** 2025-09-02 00:39:36.134294 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7) 2025-09-02 00:39:36.134306 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7) 2025-09-02 00:39:36.134317 | orchestrator | 2025-09-02 
00:39:36.134328 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:36.134339 | orchestrator | Tuesday 02 September 2025 00:39:30 +0000 (0:00:00.409) 0:00:18.694 ***** 2025-09-02 00:39:36.134350 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ab73bc68-b021-49f6-bbbb-bb60dd18c0cd) 2025-09-02 00:39:36.134361 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ab73bc68-b021-49f6-bbbb-bb60dd18c0cd) 2025-09-02 00:39:36.134371 | orchestrator | 2025-09-02 00:39:36.134382 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:36.134393 | orchestrator | Tuesday 02 September 2025 00:39:31 +0000 (0:00:00.445) 0:00:19.139 ***** 2025-09-02 00:39:36.134404 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0c16e54e-6892-4c41-822b-0a71b602051a) 2025-09-02 00:39:36.134415 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0c16e54e-6892-4c41-822b-0a71b602051a) 2025-09-02 00:39:36.134425 | orchestrator | 2025-09-02 00:39:36.134436 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:36.134447 | orchestrator | Tuesday 02 September 2025 00:39:31 +0000 (0:00:00.453) 0:00:19.592 ***** 2025-09-02 00:39:36.134475 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5a98751c-9a0d-464e-b805-2bbf5e836a0e) 2025-09-02 00:39:36.134487 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5a98751c-9a0d-464e-b805-2bbf5e836a0e) 2025-09-02 00:39:36.134497 | orchestrator | 2025-09-02 00:39:36.134508 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:36.134519 | orchestrator | Tuesday 02 September 2025 00:39:32 +0000 (0:00:00.501) 0:00:20.094 ***** 2025-09-02 00:39:36.134530 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-02 00:39:36.134541 | orchestrator | 2025-09-02 00:39:36.134552 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:36.134563 | orchestrator | Tuesday 02 September 2025 00:39:32 +0000 (0:00:00.346) 0:00:20.441 ***** 2025-09-02 00:39:36.134573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-02 00:39:36.134595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-02 00:39:36.134606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-02 00:39:36.134617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-02 00:39:36.134628 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-02 00:39:36.134638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-02 00:39:36.134649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-02 00:39:36.134660 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-02 00:39:36.134670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-02 00:39:36.134681 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-02 00:39:36.134692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-02 00:39:36.134702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-02 00:39:36.134713 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-02 00:39:36.134723 | orchestrator | 2025-09-02 00:39:36.134734 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:36.134745 | orchestrator | Tuesday 02 September 2025 00:39:33 +0000 (0:00:00.400) 0:00:20.841 ***** 2025-09-02 00:39:36.134756 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:36.134767 | orchestrator | 2025-09-02 00:39:36.134777 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:36.134788 | orchestrator | Tuesday 02 September 2025 00:39:33 +0000 (0:00:00.255) 0:00:21.097 ***** 2025-09-02 00:39:36.134799 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:36.134809 | orchestrator | 2025-09-02 00:39:36.134827 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:36.134838 | orchestrator | Tuesday 02 September 2025 00:39:34 +0000 (0:00:00.721) 0:00:21.818 ***** 2025-09-02 00:39:36.134849 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:36.134860 | orchestrator | 2025-09-02 00:39:36.134871 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:36.134882 | orchestrator | Tuesday 02 September 2025 00:39:34 +0000 (0:00:00.196) 0:00:22.015 ***** 2025-09-02 00:39:36.134893 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:36.134903 | orchestrator | 2025-09-02 00:39:36.134914 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:36.134925 | orchestrator | Tuesday 02 September 2025 00:39:34 +0000 (0:00:00.209) 0:00:22.225 ***** 2025-09-02 00:39:36.134936 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:36.134947 | orchestrator | 2025-09-02 00:39:36.134958 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:36.134968 | orchestrator | Tuesday 02 September 2025 00:39:34 +0000 (0:00:00.195) 0:00:22.421 ***** 2025-09-02 00:39:36.134979 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:36.134990 | orchestrator | 2025-09-02 00:39:36.135001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:36.135011 | orchestrator | Tuesday 02 September 2025 00:39:34 +0000 (0:00:00.195) 0:00:22.616 ***** 2025-09-02 00:39:36.135022 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:36.135033 | orchestrator | 2025-09-02 00:39:36.135043 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:36.135054 | orchestrator | Tuesday 02 September 2025 00:39:35 +0000 (0:00:00.193) 0:00:22.810 ***** 2025-09-02 00:39:36.135065 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:36.135076 | orchestrator | 2025-09-02 00:39:36.135087 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:36.135104 | orchestrator | Tuesday 02 September 
2025 00:39:35 +0000 (0:00:00.186) 0:00:22.996 ***** 2025-09-02 00:39:36.135114 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-02 00:39:36.135126 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-02 00:39:36.135137 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-02 00:39:36.135148 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-02 00:39:36.135159 | orchestrator | 2025-09-02 00:39:36.135170 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:36.135181 | orchestrator | Tuesday 02 September 2025 00:39:35 +0000 (0:00:00.650) 0:00:23.646 ***** 2025-09-02 00:39:36.135191 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:36.135202 | orchestrator | 2025-09-02 00:39:36.135219 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:42.027030 | orchestrator | Tuesday 02 September 2025 00:39:36 +0000 (0:00:00.189) 0:00:23.836 ***** 2025-09-02 00:39:42.027146 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:42.027163 | orchestrator | 2025-09-02 00:39:42.027177 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:42.027188 | orchestrator | Tuesday 02 September 2025 00:39:36 +0000 (0:00:00.199) 0:00:24.036 ***** 2025-09-02 00:39:42.027199 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:42.027210 | orchestrator | 2025-09-02 00:39:42.027222 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:42.027233 | orchestrator | Tuesday 02 September 2025 00:39:36 +0000 (0:00:00.188) 0:00:24.224 ***** 2025-09-02 00:39:42.027244 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:42.027301 | orchestrator | 2025-09-02 00:39:42.027315 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-02 00:39:42.027326 | orchestrator | Tuesday 02 September 2025 00:39:36 +0000 (0:00:00.190) 0:00:24.414 ***** 2025-09-02 00:39:42.027337 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-02 00:39:42.027348 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-02 00:39:42.027359 | orchestrator | 2025-09-02 00:39:42.027370 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-02 00:39:42.027381 | orchestrator | Tuesday 02 September 2025 00:39:37 +0000 (0:00:00.388) 0:00:24.803 ***** 2025-09-02 00:39:42.027391 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:42.027402 | orchestrator | 2025-09-02 00:39:42.027413 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-02 00:39:42.027424 | orchestrator | Tuesday 02 September 2025 00:39:37 +0000 (0:00:00.133) 0:00:24.937 ***** 2025-09-02 00:39:42.027435 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:42.027446 | orchestrator | 2025-09-02 00:39:42.027457 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-02 00:39:42.027467 | orchestrator | Tuesday 02 September 2025 00:39:37 +0000 (0:00:00.138) 0:00:25.076 ***** 2025-09-02 00:39:42.027478 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:42.027489 | orchestrator | 2025-09-02 00:39:42.027499 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-02 
00:39:42.027510 | orchestrator | Tuesday 02 September 2025 00:39:37 +0000 (0:00:00.131) 0:00:25.207 ***** 2025-09-02 00:39:42.027521 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:39:42.027533 | orchestrator | 2025-09-02 00:39:42.027544 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-02 00:39:42.027554 | orchestrator | Tuesday 02 September 2025 00:39:37 +0000 (0:00:00.131) 0:00:25.339 ***** 2025-09-02 00:39:42.027566 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de858a7c-8c7c-5154-a7df-793b28d7d942'}}) 2025-09-02 00:39:42.027577 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4843a7b7-fb51-5101-86f0-3e9039878e37'}}) 2025-09-02 00:39:42.027588 | orchestrator | 2025-09-02 00:39:42.027599 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-02 00:39:42.027637 | orchestrator | Tuesday 02 September 2025 00:39:37 +0000 (0:00:00.176) 0:00:25.515 ***** 2025-09-02 00:39:42.027649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de858a7c-8c7c-5154-a7df-793b28d7d942'}})  2025-09-02 00:39:42.027661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4843a7b7-fb51-5101-86f0-3e9039878e37'}})  2025-09-02 00:39:42.027671 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:42.027682 | orchestrator | 2025-09-02 00:39:42.027710 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-02 00:39:42.027722 | orchestrator | Tuesday 02 September 2025 00:39:37 +0000 (0:00:00.148) 0:00:25.664 ***** 2025-09-02 00:39:42.027733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de858a7c-8c7c-5154-a7df-793b28d7d942'}})  2025-09-02 00:39:42.027744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4843a7b7-fb51-5101-86f0-3e9039878e37'}})  2025-09-02 00:39:42.027755 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:42.027766 | orchestrator | 2025-09-02 00:39:42.027776 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-02 00:39:42.027787 | orchestrator | Tuesday 02 September 2025 00:39:38 +0000 (0:00:00.168) 0:00:25.833 ***** 2025-09-02 00:39:42.027798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de858a7c-8c7c-5154-a7df-793b28d7d942'}})  2025-09-02 00:39:42.027809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4843a7b7-fb51-5101-86f0-3e9039878e37'}})  2025-09-02 00:39:42.027821 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:42.027832 | orchestrator | 2025-09-02 00:39:42.027842 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-02 00:39:42.027853 | orchestrator | Tuesday 02 September 2025 00:39:38 +0000 (0:00:00.161) 0:00:25.995 ***** 2025-09-02 00:39:42.027864 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:39:42.027875 | orchestrator | 2025-09-02 00:39:42.027885 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-02 00:39:42.027896 | orchestrator | Tuesday 02 September 2025 00:39:38 +0000 (0:00:00.138) 0:00:26.134 ***** 2025-09-02 00:39:42.027907 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:39:42.027918 
| orchestrator | 2025-09-02 00:39:42.027928 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-02 00:39:42.027939 | orchestrator | Tuesday 02 September 2025 00:39:38 +0000 (0:00:00.161) 0:00:26.295 ***** 2025-09-02 00:39:42.027950 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:42.027960 | orchestrator | 2025-09-02 00:39:42.027989 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-02 00:39:42.028001 | orchestrator | Tuesday 02 September 2025 00:39:38 +0000 (0:00:00.139) 0:00:26.435 ***** 2025-09-02 00:39:42.028012 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:42.028023 | orchestrator | 2025-09-02 00:39:42.028033 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-02 00:39:42.028044 | orchestrator | Tuesday 02 September 2025 00:39:39 +0000 (0:00:00.342) 0:00:26.777 ***** 2025-09-02 00:39:42.028055 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:42.028066 | orchestrator | 2025-09-02 00:39:42.028077 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-02 00:39:42.028087 | orchestrator | Tuesday 02 September 2025 00:39:39 +0000 (0:00:00.150) 0:00:26.927 ***** 2025-09-02 00:39:42.028098 | orchestrator | ok: [testbed-node-4] => { 2025-09-02 00:39:42.028109 | orchestrator |  "ceph_osd_devices": { 2025-09-02 00:39:42.028120 | orchestrator |  "sdb": { 2025-09-02 00:39:42.028132 | orchestrator |  "osd_lvm_uuid": "de858a7c-8c7c-5154-a7df-793b28d7d942" 2025-09-02 00:39:42.028143 | orchestrator |  }, 2025-09-02 00:39:42.028154 | orchestrator |  "sdc": { 2025-09-02 00:39:42.028172 | orchestrator |  "osd_lvm_uuid": "4843a7b7-fb51-5101-86f0-3e9039878e37" 2025-09-02 00:39:42.028183 | orchestrator |  } 2025-09-02 00:39:42.028194 | orchestrator |  } 2025-09-02 00:39:42.028205 | orchestrator | } 2025-09-02 00:39:42.028216 | orchestrator | 2025-09-02 00:39:42.028227 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-02 00:39:42.028237 | orchestrator | Tuesday 02 September 2025 00:39:39 +0000 (0:00:00.152) 0:00:27.080 ***** 2025-09-02 00:39:42.028248 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:42.028279 | orchestrator | 2025-09-02 00:39:42.028291 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-02 00:39:42.028301 | orchestrator | Tuesday 02 September 2025 00:39:39 +0000 (0:00:00.145) 0:00:27.225 ***** 2025-09-02 00:39:42.028312 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:42.028323 | orchestrator | 2025-09-02 00:39:42.028334 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-02 00:39:42.028344 | orchestrator | Tuesday 02 September 2025 00:39:39 +0000 (0:00:00.136) 0:00:27.362 ***** 2025-09-02 00:39:42.028355 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:39:42.028366 | orchestrator | 2025-09-02 00:39:42.028377 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-02 00:39:42.028387 | orchestrator | Tuesday 02 September 2025 00:39:39 +0000 (0:00:00.148) 0:00:27.511 ***** 2025-09-02 00:39:42.028398 | orchestrator | changed: [testbed-node-4] => { 2025-09-02 00:39:42.028409 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-02 00:39:42.028420 | orchestrator |  "ceph_osd_devices": { 2025-09-02 
00:39:42.028431 | orchestrator |  "sdb": { 2025-09-02 00:39:42.028442 | orchestrator |  "osd_lvm_uuid": "de858a7c-8c7c-5154-a7df-793b28d7d942" 2025-09-02 00:39:42.028453 | orchestrator |  }, 2025-09-02 00:39:42.028464 | orchestrator |  "sdc": { 2025-09-02 00:39:42.028475 | orchestrator |  "osd_lvm_uuid": "4843a7b7-fb51-5101-86f0-3e9039878e37" 2025-09-02 00:39:42.028486 | orchestrator |  } 2025-09-02 00:39:42.028497 | orchestrator |  }, 2025-09-02 00:39:42.028508 | orchestrator |  "lvm_volumes": [ 2025-09-02 00:39:42.028518 | orchestrator |  { 2025-09-02 00:39:42.028530 | orchestrator |  "data": "osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942", 2025-09-02 00:39:42.028540 | orchestrator |  "data_vg": "ceph-de858a7c-8c7c-5154-a7df-793b28d7d942" 2025-09-02 00:39:42.028551 | orchestrator |  }, 2025-09-02 00:39:42.028562 | orchestrator |  { 2025-09-02 00:39:42.028573 | orchestrator |  "data": "osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37", 2025-09-02 00:39:42.028583 | orchestrator |  "data_vg": "ceph-4843a7b7-fb51-5101-86f0-3e9039878e37" 2025-09-02 00:39:42.028594 | orchestrator |  } 2025-09-02 00:39:42.028605 | orchestrator |  ] 2025-09-02 00:39:42.028616 | orchestrator |  } 2025-09-02 00:39:42.028627 | orchestrator | } 2025-09-02 00:39:42.028637 | orchestrator | 2025-09-02 00:39:42.028648 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-02 00:39:42.028659 | orchestrator | Tuesday 02 September 2025 00:39:40 +0000 (0:00:00.215) 0:00:27.726 ***** 2025-09-02 00:39:42.028670 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-02 00:39:42.028680 | orchestrator | 2025-09-02 00:39:42.028691 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-02 00:39:42.028702 | orchestrator | 2025-09-02 00:39:42.028712 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-02 00:39:42.028723 | orchestrator | Tuesday 02 September 2025 00:39:40 +0000 (0:00:00.848) 0:00:28.575 ***** 2025-09-02 00:39:42.028734 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-02 00:39:42.028745 | orchestrator | 2025-09-02 00:39:42.028755 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-02 00:39:42.028766 | orchestrator | Tuesday 02 September 2025 00:39:41 +0000 (0:00:00.349) 0:00:28.925 ***** 2025-09-02 00:39:42.028784 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:39:42.028796 | orchestrator | 2025-09-02 00:39:42.028812 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:42.028823 | orchestrator | Tuesday 02 September 2025 00:39:41 +0000 (0:00:00.446) 0:00:29.371 ***** 2025-09-02 00:39:42.028834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-02 00:39:42.028845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-02 00:39:42.028856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-02 00:39:42.028866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-02 00:39:42.028877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-02 00:39:42.028888 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-09-02 00:39:42.028905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-02 00:39:50.413920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-02 00:39:50.414099 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-02 00:39:50.414117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-02 00:39:50.414129 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-02 00:39:50.414140 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-02 00:39:50.414151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-02 00:39:50.414163 | orchestrator | 2025-09-02 00:39:50.414175 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:50.414187 | orchestrator | Tuesday 02 September 2025 00:39:42 +0000 (0:00:00.353) 0:00:29.725 ***** 2025-09-02 00:39:50.414198 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.414209 | orchestrator | 2025-09-02 00:39:50.414220 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:50.414231 | orchestrator | Tuesday 02 September 2025 00:39:42 +0000 (0:00:00.223) 0:00:29.948 ***** 2025-09-02 00:39:50.414242 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.414252 | orchestrator | 2025-09-02 00:39:50.414263 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:50.414274 | orchestrator | Tuesday 02 September 2025 00:39:42 +0000 (0:00:00.222) 0:00:30.171 ***** 2025-09-02 00:39:50.414285 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.414295 | orchestrator | 2025-09-02 00:39:50.414347 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:50.414359 | orchestrator | Tuesday 02 September 2025 00:39:42 +0000 (0:00:00.183) 0:00:30.354 ***** 2025-09-02 00:39:50.414370 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.414380 | orchestrator | 2025-09-02 00:39:50.414391 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:50.414402 | orchestrator | Tuesday 02 September 2025 00:39:42 +0000 (0:00:00.190) 0:00:30.545 ***** 2025-09-02 00:39:50.414413 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.414423 | orchestrator | 2025-09-02 00:39:50.414434 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:50.414446 | orchestrator | Tuesday 02 September 2025 00:39:43 +0000 (0:00:00.182) 0:00:30.728 ***** 2025-09-02 00:39:50.414459 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.414472 | orchestrator | 2025-09-02 00:39:50.414483 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:50.414497 | orchestrator | Tuesday 02 September 2025 00:39:43 +0000 (0:00:00.198) 0:00:30.926 ***** 2025-09-02 00:39:50.414509 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.414545 | orchestrator | 2025-09-02 00:39:50.414558 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-09-02 00:39:50.414571 | orchestrator | Tuesday 02 September 2025 00:39:43 +0000 (0:00:00.186) 0:00:31.113 ***** 2025-09-02 00:39:50.414583 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.414595 | orchestrator | 2025-09-02 00:39:50.414608 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:50.414621 | orchestrator | Tuesday 02 September 2025 00:39:43 +0000 (0:00:00.189) 0:00:31.303 ***** 2025-09-02 00:39:50.414635 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5) 2025-09-02 00:39:50.414648 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5) 2025-09-02 00:39:50.414660 | orchestrator | 2025-09-02 00:39:50.414674 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:50.414687 | orchestrator | Tuesday 02 September 2025 00:39:44 +0000 (0:00:00.516) 0:00:31.820 ***** 2025-09-02 00:39:50.414700 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8c8a1bad-d3c8-4ce8-afe3-e6b6dedd17eb) 2025-09-02 00:39:50.414712 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8c8a1bad-d3c8-4ce8-afe3-e6b6dedd17eb) 2025-09-02 00:39:50.414724 | orchestrator | 2025-09-02 00:39:50.414737 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:50.414749 | orchestrator | Tuesday 02 September 2025 00:39:44 +0000 (0:00:00.685) 0:00:32.505 ***** 2025-09-02 00:39:50.414762 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4851d1ac-b90a-4b34-9adb-d79585c21de6) 2025-09-02 00:39:50.414776 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4851d1ac-b90a-4b34-9adb-d79585c21de6) 2025-09-02 00:39:50.414788 | orchestrator | 2025-09-02 00:39:50.414800 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:50.414811 | orchestrator | Tuesday 02 September 2025 00:39:45 +0000 (0:00:00.457) 0:00:32.962 ***** 2025-09-02 00:39:50.414822 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6860efa8-e6c6-43d7-8842-eeafa8a27f70) 2025-09-02 00:39:50.414833 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6860efa8-e6c6-43d7-8842-eeafa8a27f70) 2025-09-02 00:39:50.414843 | orchestrator | 2025-09-02 00:39:50.414854 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:39:50.414865 | orchestrator | Tuesday 02 September 2025 00:39:45 +0000 (0:00:00.474) 0:00:33.437 ***** 2025-09-02 00:39:50.414876 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-02 00:39:50.414887 | orchestrator | 2025-09-02 00:39:50.414897 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:50.414908 | orchestrator | Tuesday 02 September 2025 00:39:46 +0000 (0:00:00.368) 0:00:33.806 ***** 2025-09-02 00:39:50.414938 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-02 00:39:50.414949 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-02 00:39:50.414960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-02 00:39:50.414971 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-02 00:39:50.414982 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-02 00:39:50.414992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-02 00:39:50.415020 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-02 00:39:50.415031 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-02 00:39:50.415043 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-02 00:39:50.415063 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-02 00:39:50.415073 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-02 00:39:50.415084 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-02 00:39:50.415095 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-02 00:39:50.415106 | orchestrator | 2025-09-02 00:39:50.415117 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:50.415128 | orchestrator | Tuesday 02 September 2025 00:39:46 +0000 (0:00:00.415) 0:00:34.221 ***** 2025-09-02 00:39:50.415138 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.415149 | orchestrator | 2025-09-02 00:39:50.415160 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:50.415171 | orchestrator | Tuesday 02 September 2025 00:39:46 +0000 (0:00:00.215) 0:00:34.437 ***** 2025-09-02 00:39:50.415182 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.415193 | orchestrator | 2025-09-02 00:39:50.415204 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:50.415215 | orchestrator | Tuesday 02 September 2025 00:39:46 +0000 (0:00:00.225) 0:00:34.662 ***** 2025-09-02 00:39:50.415225 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.415236 | orchestrator | 2025-09-02 00:39:50.415252 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:50.415263 | orchestrator | Tuesday 02 September 2025 00:39:47 +0000 (0:00:00.221) 0:00:34.884 ***** 2025-09-02 00:39:50.415274 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.415284 | orchestrator | 2025-09-02 00:39:50.415295 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:50.415323 | orchestrator | Tuesday 02 September 2025 00:39:47 +0000 (0:00:00.226) 0:00:35.110 ***** 2025-09-02 00:39:50.415335 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.415345 | orchestrator | 2025-09-02 00:39:50.415356 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:50.415367 | orchestrator | Tuesday 02 September 2025 00:39:47 +0000 (0:00:00.191) 0:00:35.301 ***** 2025-09-02 00:39:50.415377 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.415388 | orchestrator | 2025-09-02 00:39:50.415399 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-09-02 00:39:50.415410 | orchestrator | Tuesday 02 September 2025 00:39:48 +0000 (0:00:00.703) 0:00:36.004 ***** 2025-09-02 00:39:50.415421 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.415432 | orchestrator | 2025-09-02 00:39:50.415442 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:50.415453 | orchestrator | Tuesday 02 September 2025 00:39:48 +0000 (0:00:00.222) 0:00:36.227 ***** 2025-09-02 00:39:50.415464 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.415475 | orchestrator | 2025-09-02 00:39:50.415485 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:50.415496 | orchestrator | Tuesday 02 September 2025 00:39:48 +0000 (0:00:00.249) 0:00:36.476 ***** 2025-09-02 00:39:50.415507 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-02 00:39:50.415518 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-02 00:39:50.415529 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-02 00:39:50.415540 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-02 00:39:50.415551 | orchestrator | 2025-09-02 00:39:50.415562 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:50.415572 | orchestrator | Tuesday 02 September 2025 00:39:49 +0000 (0:00:00.719) 0:00:37.196 ***** 2025-09-02 00:39:50.415583 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.415594 | orchestrator | 2025-09-02 00:39:50.415605 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:50.415623 | orchestrator | Tuesday 02 September 2025 00:39:49 +0000 (0:00:00.293) 0:00:37.489 ***** 2025-09-02 00:39:50.415634 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.415645 | orchestrator | 2025-09-02 00:39:50.415655 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:50.415666 | orchestrator | Tuesday 02 September 2025 00:39:50 +0000 (0:00:00.224) 0:00:37.714 ***** 2025-09-02 00:39:50.415677 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.415688 | orchestrator | 2025-09-02 00:39:50.415699 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:39:50.415710 | orchestrator | Tuesday 02 September 2025 00:39:50 +0000 (0:00:00.202) 0:00:37.916 ***** 2025-09-02 00:39:50.415721 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:50.415731 | orchestrator | 2025-09-02 00:39:50.415742 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-02 00:39:50.415760 | orchestrator | Tuesday 02 September 2025 00:39:50 +0000 (0:00:00.197) 0:00:38.114 ***** 2025-09-02 00:39:54.426450 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-09-02 00:39:54.426556 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-02 00:39:54.426570 | orchestrator | 2025-09-02 00:39:54.426582 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-02 00:39:54.426592 | orchestrator | Tuesday 02 September 2025 00:39:50 +0000 (0:00:00.173) 0:00:38.287 ***** 2025-09-02 00:39:54.426602 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:54.426612 | orchestrator | 2025-09-02 00:39:54.426622 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-09-02 00:39:54.426632 | orchestrator | Tuesday 02 September 2025 00:39:50 +0000 (0:00:00.127) 0:00:38.414 ***** 2025-09-02 00:39:54.426641 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:54.426651 | orchestrator | 2025-09-02 00:39:54.426661 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-02 00:39:54.426670 | orchestrator | Tuesday 02 September 2025 00:39:50 +0000 (0:00:00.180) 0:00:38.595 ***** 2025-09-02 00:39:54.426680 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:54.426689 | orchestrator | 2025-09-02 00:39:54.426699 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-02 00:39:54.426708 | orchestrator | Tuesday 02 September 2025 00:39:51 +0000 (0:00:00.135) 0:00:38.730 ***** 2025-09-02 00:39:54.426718 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:39:54.426728 | orchestrator | 2025-09-02 00:39:54.426738 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-02 00:39:54.426747 | orchestrator | Tuesday 02 September 2025 00:39:51 +0000 (0:00:00.371) 0:00:39.102 ***** 2025-09-02 00:39:54.426758 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ad19e49-f824-57b0-a164-7b3912efd317'}}) 2025-09-02 00:39:54.426768 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '14a05dcf-7776-5f2b-8543-65494bada47a'}}) 2025-09-02 00:39:54.426778 | orchestrator | 2025-09-02 00:39:54.426787 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-02 00:39:54.426797 | orchestrator | Tuesday 02 September 2025 00:39:51 +0000 (0:00:00.203) 0:00:39.306 ***** 2025-09-02 00:39:54.426807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ad19e49-f824-57b0-a164-7b3912efd317'}})  2025-09-02 00:39:54.426819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '14a05dcf-7776-5f2b-8543-65494bada47a'}})  2025-09-02 00:39:54.426828 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:54.426838 | orchestrator | 2025-09-02 00:39:54.426848 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-02 00:39:54.426858 | orchestrator | Tuesday 02 September 2025 00:39:51 +0000 (0:00:00.165) 0:00:39.471 ***** 2025-09-02 00:39:54.426867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ad19e49-f824-57b0-a164-7b3912efd317'}})  2025-09-02 00:39:54.426902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '14a05dcf-7776-5f2b-8543-65494bada47a'}})  2025-09-02 00:39:54.426913 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:54.426923 | orchestrator | 2025-09-02 00:39:54.426932 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-02 00:39:54.426943 | orchestrator | Tuesday 02 September 2025 00:39:51 +0000 (0:00:00.180) 0:00:39.651 ***** 2025-09-02 00:39:54.426954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ad19e49-f824-57b0-a164-7b3912efd317'}})  2025-09-02 00:39:54.426982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '14a05dcf-7776-5f2b-8543-65494bada47a'}})  2025-09-02 
00:39:54.426994 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:54.427005 | orchestrator | 2025-09-02 00:39:54.427016 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-02 00:39:54.427027 | orchestrator | Tuesday 02 September 2025 00:39:52 +0000 (0:00:00.195) 0:00:39.847 ***** 2025-09-02 00:39:54.427038 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:39:54.427050 | orchestrator | 2025-09-02 00:39:54.427062 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-02 00:39:54.427073 | orchestrator | Tuesday 02 September 2025 00:39:52 +0000 (0:00:00.162) 0:00:40.009 ***** 2025-09-02 00:39:54.427084 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:39:54.427095 | orchestrator | 2025-09-02 00:39:54.427105 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-02 00:39:54.427114 | orchestrator | Tuesday 02 September 2025 00:39:52 +0000 (0:00:00.122) 0:00:40.132 ***** 2025-09-02 00:39:54.427124 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:54.427133 | orchestrator | 2025-09-02 00:39:54.427143 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-02 00:39:54.427153 | orchestrator | Tuesday 02 September 2025 00:39:52 +0000 (0:00:00.117) 0:00:40.249 ***** 2025-09-02 00:39:54.427162 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:54.427172 | orchestrator | 2025-09-02 00:39:54.427182 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-02 00:39:54.427191 | orchestrator | Tuesday 02 September 2025 00:39:52 +0000 (0:00:00.097) 0:00:40.347 ***** 2025-09-02 00:39:54.427201 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:54.427210 | orchestrator | 2025-09-02 00:39:54.427220 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-02 00:39:54.427230 | orchestrator | Tuesday 02 September 2025 00:39:52 +0000 (0:00:00.095) 0:00:40.442 ***** 2025-09-02 00:39:54.427240 | orchestrator | ok: [testbed-node-5] => { 2025-09-02 00:39:54.427249 | orchestrator |  "ceph_osd_devices": { 2025-09-02 00:39:54.427259 | orchestrator |  "sdb": { 2025-09-02 00:39:54.427269 | orchestrator |  "osd_lvm_uuid": "7ad19e49-f824-57b0-a164-7b3912efd317" 2025-09-02 00:39:54.427298 | orchestrator |  }, 2025-09-02 00:39:54.427308 | orchestrator |  "sdc": { 2025-09-02 00:39:54.427318 | orchestrator |  "osd_lvm_uuid": "14a05dcf-7776-5f2b-8543-65494bada47a" 2025-09-02 00:39:54.427358 | orchestrator |  } 2025-09-02 00:39:54.427369 | orchestrator |  } 2025-09-02 00:39:54.427379 | orchestrator | } 2025-09-02 00:39:54.427389 | orchestrator | 2025-09-02 00:39:54.427399 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-02 00:39:54.427409 | orchestrator | Tuesday 02 September 2025 00:39:52 +0000 (0:00:00.097) 0:00:40.540 ***** 2025-09-02 00:39:54.427418 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:54.427428 | orchestrator | 2025-09-02 00:39:54.427437 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-02 00:39:54.427447 | orchestrator | Tuesday 02 September 2025 00:39:52 +0000 (0:00:00.093) 0:00:40.633 ***** 2025-09-02 00:39:54.427457 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:54.427466 | orchestrator | 2025-09-02 00:39:54.427476 | orchestrator | 
TASK [Print shared DB/WAL devices] ********************************************* 2025-09-02 00:39:54.427495 | orchestrator | Tuesday 02 September 2025 00:39:53 +0000 (0:00:00.267) 0:00:40.900 ***** 2025-09-02 00:39:54.427505 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:39:54.427515 | orchestrator | 2025-09-02 00:39:54.427524 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-02 00:39:54.427534 | orchestrator | Tuesday 02 September 2025 00:39:53 +0000 (0:00:00.115) 0:00:41.016 ***** 2025-09-02 00:39:54.427544 | orchestrator | changed: [testbed-node-5] => { 2025-09-02 00:39:54.427553 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-02 00:39:54.427563 | orchestrator |  "ceph_osd_devices": { 2025-09-02 00:39:54.427573 | orchestrator |  "sdb": { 2025-09-02 00:39:54.427583 | orchestrator |  "osd_lvm_uuid": "7ad19e49-f824-57b0-a164-7b3912efd317" 2025-09-02 00:39:54.427592 | orchestrator |  }, 2025-09-02 00:39:54.427602 | orchestrator |  "sdc": { 2025-09-02 00:39:54.427612 | orchestrator |  "osd_lvm_uuid": "14a05dcf-7776-5f2b-8543-65494bada47a" 2025-09-02 00:39:54.427622 | orchestrator |  } 2025-09-02 00:39:54.427631 | orchestrator |  }, 2025-09-02 00:39:54.427641 | orchestrator |  "lvm_volumes": [ 2025-09-02 00:39:54.427651 | orchestrator |  { 2025-09-02 00:39:54.427661 | orchestrator |  "data": "osd-block-7ad19e49-f824-57b0-a164-7b3912efd317", 2025-09-02 00:39:54.427670 | orchestrator |  "data_vg": "ceph-7ad19e49-f824-57b0-a164-7b3912efd317" 2025-09-02 00:39:54.427680 | orchestrator |  }, 2025-09-02 00:39:54.427690 | orchestrator |  { 2025-09-02 00:39:54.427700 | orchestrator |  "data": "osd-block-14a05dcf-7776-5f2b-8543-65494bada47a", 2025-09-02 00:39:54.427709 | orchestrator |  "data_vg": "ceph-14a05dcf-7776-5f2b-8543-65494bada47a" 2025-09-02 00:39:54.427719 | orchestrator |  } 2025-09-02 00:39:54.427729 | orchestrator |  ] 2025-09-02 00:39:54.427739 | orchestrator |  } 2025-09-02 00:39:54.427752 | orchestrator | } 2025-09-02 00:39:54.427762 | orchestrator | 2025-09-02 00:39:54.427772 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-02 00:39:54.427782 | orchestrator | Tuesday 02 September 2025 00:39:53 +0000 (0:00:00.198) 0:00:41.214 ***** 2025-09-02 00:39:54.427791 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-02 00:39:54.427801 | orchestrator | 2025-09-02 00:39:54.427811 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:39:54.427821 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-02 00:39:54.427831 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-02 00:39:54.427841 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-02 00:39:54.427851 | orchestrator | 2025-09-02 00:39:54.427860 | orchestrator | 2025-09-02 00:39:54.427870 | orchestrator | 2025-09-02 00:39:54.427880 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:39:54.427889 | orchestrator | Tuesday 02 September 2025 00:39:54 +0000 (0:00:00.904) 0:00:42.119 ***** 2025-09-02 00:39:54.427899 | orchestrator | =============================================================================== 2025-09-02 00:39:54.427909 | orchestrator | Write configuration file 
------------------------------------------------ 3.98s 2025-09-02 00:39:54.427918 | orchestrator | Add known partitions to the list of available block devices ------------- 1.20s 2025-09-02 00:39:54.427928 | orchestrator | Add known links to the list of available block devices ------------------ 1.19s 2025-09-02 00:39:54.427938 | orchestrator | Add known partitions to the list of available block devices ------------- 1.06s 2025-09-02 00:39:54.427947 | orchestrator | Get initial list of available block devices ----------------------------- 0.92s 2025-09-02 00:39:54.427964 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.86s 2025-09-02 00:39:54.427974 | orchestrator | Add known links to the list of available block devices ------------------ 0.77s 2025-09-02 00:39:54.427983 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.75s 2025-09-02 00:39:54.427993 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.75s 2025-09-02 00:39:54.428003 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2025-09-02 00:39:54.428013 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2025-09-02 00:39:54.428022 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2025-09-02 00:39:54.428032 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2025-09-02 00:39:54.428042 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.66s 2025-09-02 00:39:54.428058 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2025-09-02 00:39:54.736947 | orchestrator | Print configuration data ------------------------------------------------ 0.64s 2025-09-02 00:39:54.737053 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2025-09-02 00:39:54.737066 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2025-09-02 00:39:54.737077 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2025-09-02 00:39:54.737088 | orchestrator | Set WAL devices config data --------------------------------------------- 0.59s 2025-09-02 00:40:17.192964 | orchestrator | 2025-09-02 00:40:17 | INFO  | Task db4a9741-4ad2-4950-93a1-4dc9f9d4dcfc (sync inventory) is running in background. Output coming soon. 
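The "Ceph configure LVM" play recapped above boils down to a small mapping: each entry in ceph_osd_devices is given a stable osd_lvm_uuid, and the block-only lvm_volumes list pairs an osd-block-<uuid> logical volume with a ceph-<uuid> volume group. A minimal Python sketch of that mapping follows; only the osd-block-/ceph- name prefixes are taken from the printed configuration data, while the uuid5 derivation from host and device name is an assumption (the version-5 nibble in the logged UUIDs is the only hint), so treat the namespace and name format as illustrative rather than the playbook's actual logic.

# Sketch (not the OSISM playbook itself) of the ceph_osd_devices ->
# lvm_volumes mapping printed by the "Print configuration data" task.
import uuid

def osd_lvm_uuid(hostname: str, device: str) -> str:
    # Assumption: a uuid5 over host+device gives the stable per-device UUID
    # (version nibble '5', matching e.g. 13b5fa21-9dd3-5f23-... in the log).
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}"))

def compile_lvm_volumes(ceph_osd_devices: dict) -> list:
    # Block-only case (no separate DB/WAL device): each OSD gets a data LV
    # and data VG named after its osd_lvm_uuid, as shown in the output above.
    return [
        {
            "data": f"osd-block-{params['osd_lvm_uuid']}",
            "data_vg": f"ceph-{params['osd_lvm_uuid']}",
        }
        for params in ceph_osd_devices.values()
    ]

if __name__ == "__main__":
    devices = {
        dev: {"osd_lvm_uuid": osd_lvm_uuid("testbed-node-3", dev)}
        for dev in ("sdb", "sdc")
    }
    print({"ceph_osd_devices": devices,
           "lvm_volumes": compile_lvm_volumes(devices)})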
2025-09-02 00:40:43.208507 | orchestrator | 2025-09-02 00:40:18 | INFO  | Starting group_vars file reorganization 2025-09-02 00:40:43.208605 | orchestrator | 2025-09-02 00:40:18 | INFO  | Moved 0 file(s) to their respective directories 2025-09-02 00:40:43.208618 | orchestrator | 2025-09-02 00:40:18 | INFO  | Group_vars file reorganization completed 2025-09-02 00:40:43.208625 | orchestrator | 2025-09-02 00:40:21 | INFO  | Starting variable preparation from inventory 2025-09-02 00:40:43.208632 | orchestrator | 2025-09-02 00:40:25 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-09-02 00:40:43.208639 | orchestrator | 2025-09-02 00:40:25 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-09-02 00:40:43.208646 | orchestrator | 2025-09-02 00:40:25 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-09-02 00:40:43.208664 | orchestrator | 2025-09-02 00:40:25 | INFO  | 3 file(s) written, 6 host(s) processed 2025-09-02 00:40:43.208672 | orchestrator | 2025-09-02 00:40:25 | INFO  | Variable preparation completed 2025-09-02 00:40:43.208678 | orchestrator | 2025-09-02 00:40:26 | INFO  | Starting inventory overwrite handling 2025-09-02 00:40:43.208685 | orchestrator | 2025-09-02 00:40:26 | INFO  | Handling group overwrites in 99-overwrite 2025-09-02 00:40:43.208693 | orchestrator | 2025-09-02 00:40:26 | INFO  | Removing group frr:children from 60-generic 2025-09-02 00:40:43.208700 | orchestrator | 2025-09-02 00:40:26 | INFO  | Removing group storage:children from 50-kolla 2025-09-02 00:40:43.208706 | orchestrator | 2025-09-02 00:40:26 | INFO  | Removing group netbird:children from 50-infrastruture 2025-09-02 00:40:43.208712 | orchestrator | 2025-09-02 00:40:26 | INFO  | Removing group ceph-rgw from 50-ceph 2025-09-02 00:40:43.208719 | orchestrator | 2025-09-02 00:40:26 | INFO  | Removing group ceph-mds from 50-ceph 2025-09-02 00:40:43.208725 | orchestrator | 2025-09-02 00:40:26 | INFO  | Handling group overwrites in 20-roles 2025-09-02 00:40:43.208732 | orchestrator | 2025-09-02 00:40:26 | INFO  | Removing group k3s_node from 50-infrastruture 2025-09-02 00:40:43.208752 | orchestrator | 2025-09-02 00:40:26 | INFO  | Removed 6 group(s) in total 2025-09-02 00:40:43.208759 | orchestrator | 2025-09-02 00:40:26 | INFO  | Inventory overwrite handling completed 2025-09-02 00:40:43.208766 | orchestrator | 2025-09-02 00:40:27 | INFO  | Starting merge of inventory files 2025-09-02 00:40:43.208772 | orchestrator | 2025-09-02 00:40:27 | INFO  | Inventory files merged successfully 2025-09-02 00:40:43.208778 | orchestrator | 2025-09-02 00:40:30 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-09-02 00:40:43.208784 | orchestrator | 2025-09-02 00:40:41 | INFO  | Successfully wrote ClusterShell configuration 2025-09-02 00:40:43.208791 | orchestrator | [master 837135c] 2025-09-02-00-40 2025-09-02 00:40:43.208798 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-09-02 00:40:45.446188 | orchestrator | 2025-09-02 00:40:45 | INFO  | Task 939a74a3-32c5-4424-9379-27f196194602 (ceph-create-lvm-devices) was prepared for execution. 2025-09-02 00:40:45.446461 | orchestrator | 2025-09-02 00:40:45 | INFO  | It takes a moment until task 939a74a3-32c5-4424-9379-27f196194602 (ceph-create-lvm-devices) has been started and output is visible here. 
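The sync-inventory output above reconciles a layered inventory: groups redefined in higher-priority files (99-overwrite, 20-roles) are removed from the lower-priority files (60-generic, 50-kolla, 50-infrastruture, 50-ceph) before the files are merged, so the overriding definition wins. A minimal sketch of that overwrite rule, assuming a simple per-layer group-name match; the file names and data layout below are illustrative and not the actual osism sync-inventory implementation.

# Sketch of the "inventory overwrite handling" step logged above.
from typing import Dict, List, Set

# Each layer maps a group section name (e.g. "frr:children") to its members.
Layer = Dict[str, Set[str]]

def handle_overwrites(layers: Dict[str, Layer], overwrite_layers: List[str]) -> int:
    """Remove groups defined in overwrite layers from all other layer files."""
    removed = 0
    for name in overwrite_layers:
        for group in layers.get(name, {}):
            for other, content in layers.items():
                if other != name and group in content:
                    print(f"INFO | Removing group {group} from {other}")
                    del content[group]
                    removed += 1
    print(f"INFO | Removed {removed} group(s) in total")
    return removed

if __name__ == "__main__":
    # Hypothetical layer contents, only to exercise the rule.
    layers = {
        "60-generic": {"frr:children": {"testbed-nodes"}},
        "50-kolla": {"storage:children": {"ceph-osd"}},
        "99-overwrite": {"frr:children": {"manager"}, "storage:children": set()},
    }
    handle_overwrites(layers, ["99-overwrite"])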
2025-09-02 00:40:57.709289 | orchestrator | 2025-09-02 00:40:57.709381 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-02 00:40:57.709394 | orchestrator | 2025-09-02 00:40:57.709405 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-02 00:40:57.709420 | orchestrator | Tuesday 02 September 2025 00:40:49 +0000 (0:00:00.310) 0:00:00.310 ***** 2025-09-02 00:40:57.710092 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-02 00:40:57.710112 | orchestrator | 2025-09-02 00:40:57.710122 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-02 00:40:57.710132 | orchestrator | Tuesday 02 September 2025 00:40:50 +0000 (0:00:00.229) 0:00:00.539 ***** 2025-09-02 00:40:57.710141 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:40:57.710151 | orchestrator | 2025-09-02 00:40:57.710160 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:40:57.710169 | orchestrator | Tuesday 02 September 2025 00:40:50 +0000 (0:00:00.225) 0:00:00.765 ***** 2025-09-02 00:40:57.710179 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-02 00:40:57.710206 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-02 00:40:57.710223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-02 00:40:57.710241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-02 00:40:57.710249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-02 00:40:57.710257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-02 00:40:57.710265 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-02 00:40:57.710273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-02 00:40:57.710281 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-02 00:40:57.710289 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-02 00:40:57.710297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-02 00:40:57.710305 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-02 00:40:57.710313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-02 00:40:57.710321 | orchestrator | 2025-09-02 00:40:57.710329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:40:57.710357 | orchestrator | Tuesday 02 September 2025 00:40:50 +0000 (0:00:00.410) 0:00:01.175 ***** 2025-09-02 00:40:57.710365 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:40:57.710373 | orchestrator | 2025-09-02 00:40:57.710381 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:40:57.710389 | orchestrator | Tuesday 02 September 2025 00:40:51 +0000 (0:00:00.513) 0:00:01.689 ***** 2025-09-02 00:40:57.710397 | orchestrator | skipping: [testbed-node-3] 2025-09-02 
00:40:57.710405 | orchestrator | 2025-09-02 00:40:57.710413 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:40:57.710421 | orchestrator | Tuesday 02 September 2025 00:40:51 +0000 (0:00:00.198) 0:00:01.887 ***** 2025-09-02 00:40:57.710429 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:40:57.710437 | orchestrator | 2025-09-02 00:40:57.710445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:40:57.710453 | orchestrator | Tuesday 02 September 2025 00:40:51 +0000 (0:00:00.204) 0:00:02.092 ***** 2025-09-02 00:40:57.710460 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:40:57.710468 | orchestrator | 2025-09-02 00:40:57.710476 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:40:57.710484 | orchestrator | Tuesday 02 September 2025 00:40:51 +0000 (0:00:00.215) 0:00:02.308 ***** 2025-09-02 00:40:57.710492 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:40:57.710500 | orchestrator | 2025-09-02 00:40:57.710508 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:40:57.710515 | orchestrator | Tuesday 02 September 2025 00:40:52 +0000 (0:00:00.209) 0:00:02.517 ***** 2025-09-02 00:40:57.710523 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:40:57.710531 | orchestrator | 2025-09-02 00:40:57.710539 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:40:57.710547 | orchestrator | Tuesday 02 September 2025 00:40:52 +0000 (0:00:00.220) 0:00:02.737 ***** 2025-09-02 00:40:57.710555 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:40:57.710562 | orchestrator | 2025-09-02 00:40:57.710570 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:40:57.710578 | orchestrator | Tuesday 02 September 2025 00:40:52 +0000 (0:00:00.226) 0:00:02.964 ***** 2025-09-02 00:40:57.710586 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:40:57.710594 | orchestrator | 2025-09-02 00:40:57.710602 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:40:57.710610 | orchestrator | Tuesday 02 September 2025 00:40:52 +0000 (0:00:00.206) 0:00:03.170 ***** 2025-09-02 00:40:57.710617 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b) 2025-09-02 00:40:57.710626 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b) 2025-09-02 00:40:57.710634 | orchestrator | 2025-09-02 00:40:57.710669 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:40:57.710677 | orchestrator | Tuesday 02 September 2025 00:40:53 +0000 (0:00:00.419) 0:00:03.590 ***** 2025-09-02 00:40:57.710701 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_73f7aaa7-092a-4c1a-a663-fd98a6f92d43) 2025-09-02 00:40:57.710710 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_73f7aaa7-092a-4c1a-a663-fd98a6f92d43) 2025-09-02 00:40:57.710718 | orchestrator | 2025-09-02 00:40:57.710726 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:40:57.710734 | orchestrator | Tuesday 02 September 2025 00:40:53 +0000 (0:00:00.410) 0:00:04.000 ***** 2025-09-02 
00:40:57.710741 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_533befbb-84ad-4d2f-a6fe-9bcc757d70d3) 2025-09-02 00:40:57.710749 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_533befbb-84ad-4d2f-a6fe-9bcc757d70d3) 2025-09-02 00:40:57.710757 | orchestrator | 2025-09-02 00:40:57.710765 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:40:57.710779 | orchestrator | Tuesday 02 September 2025 00:40:54 +0000 (0:00:00.665) 0:00:04.665 ***** 2025-09-02 00:40:57.710787 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_422500f3-63b7-48d3-a02b-7c8a68fd4498) 2025-09-02 00:40:57.710795 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_422500f3-63b7-48d3-a02b-7c8a68fd4498) 2025-09-02 00:40:57.710802 | orchestrator | 2025-09-02 00:40:57.710810 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:40:57.710818 | orchestrator | Tuesday 02 September 2025 00:40:55 +0000 (0:00:00.844) 0:00:05.509 ***** 2025-09-02 00:40:57.710826 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-02 00:40:57.710834 | orchestrator | 2025-09-02 00:40:57.710842 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:40:57.710849 | orchestrator | Tuesday 02 September 2025 00:40:55 +0000 (0:00:00.313) 0:00:05.822 ***** 2025-09-02 00:40:57.710857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-02 00:40:57.710865 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-02 00:40:57.710873 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-02 00:40:57.710880 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-02 00:40:57.710900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-02 00:40:57.710908 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-02 00:40:57.710916 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-02 00:40:57.710924 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-02 00:40:57.710932 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-02 00:40:57.710940 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-02 00:40:57.710947 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-02 00:40:57.710956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-02 00:40:57.710975 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-02 00:40:57.710988 | orchestrator | 2025-09-02 00:40:57.711001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:40:57.711014 | orchestrator | Tuesday 02 September 2025 00:40:55 +0000 (0:00:00.481) 0:00:06.304 ***** 2025-09-02 00:40:57.711028 | orchestrator | skipping: [testbed-node-3] 
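Each repetition of the two device-list tasks above is one pass of an included task file over a single device (the loop item shown in the `included:` lines). A minimal sketch of that pattern, assuming a hypothetical fact name and not the literal /ansible/tasks/_add-device-partitions.yml:

```yaml
# Append the partitions of one device (the current loop item) to the list of
# available block devices; "available_block_devices" is a hypothetical fact name.
- name: Add known partitions to the list of available block devices
  ansible.builtin.set_fact:
    available_block_devices: >-
      {{ (available_block_devices | default([]))
         + ((ansible_facts['devices'][item]['partitions'] | default({})).keys() | list) }}
  when: (ansible_facts['devices'][item]['partitions'] | default({})) | length > 0
```

Devices without partitions (the loopN and sr0 items) fall through the `when` condition, which is why they appear as `skipping` above while sda contributes sda1, sda14, sda15 and sda16.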
2025-09-02 00:40:57.711041 | orchestrator | 2025-09-02 00:40:57.711056 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:40:57.711068 | orchestrator | Tuesday 02 September 2025 00:40:56 +0000 (0:00:00.226) 0:00:06.530 ***** 2025-09-02 00:40:57.711081 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:40:57.711095 | orchestrator | 2025-09-02 00:40:57.711108 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:40:57.711121 | orchestrator | Tuesday 02 September 2025 00:40:56 +0000 (0:00:00.203) 0:00:06.734 ***** 2025-09-02 00:40:57.711134 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:40:57.711148 | orchestrator | 2025-09-02 00:40:57.711162 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:40:57.711175 | orchestrator | Tuesday 02 September 2025 00:40:56 +0000 (0:00:00.236) 0:00:06.971 ***** 2025-09-02 00:40:57.711186 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:40:57.711200 | orchestrator | 2025-09-02 00:40:57.711213 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:40:57.711235 | orchestrator | Tuesday 02 September 2025 00:40:56 +0000 (0:00:00.228) 0:00:07.200 ***** 2025-09-02 00:40:57.711249 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:40:57.711262 | orchestrator | 2025-09-02 00:40:57.711276 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:40:57.711289 | orchestrator | Tuesday 02 September 2025 00:40:56 +0000 (0:00:00.216) 0:00:07.416 ***** 2025-09-02 00:40:57.711303 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:40:57.711316 | orchestrator | 2025-09-02 00:40:57.711329 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:40:57.711342 | orchestrator | Tuesday 02 September 2025 00:40:57 +0000 (0:00:00.219) 0:00:07.635 ***** 2025-09-02 00:40:57.711356 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:40:57.711369 | orchestrator | 2025-09-02 00:40:57.711381 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:40:57.711395 | orchestrator | Tuesday 02 September 2025 00:40:57 +0000 (0:00:00.241) 0:00:07.876 ***** 2025-09-02 00:40:57.711416 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.587375 | orchestrator | 2025-09-02 00:41:05.587464 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:05.587480 | orchestrator | Tuesday 02 September 2025 00:40:57 +0000 (0:00:00.250) 0:00:08.127 ***** 2025-09-02 00:41:05.587492 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-02 00:41:05.587504 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-02 00:41:05.587515 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-02 00:41:05.587526 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-02 00:41:05.587537 | orchestrator | 2025-09-02 00:41:05.587548 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:05.587559 | orchestrator | Tuesday 02 September 2025 00:40:58 +0000 (0:00:01.126) 0:00:09.254 ***** 2025-09-02 00:41:05.587570 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.587580 | orchestrator | 2025-09-02 00:41:05.587591 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:05.587603 | orchestrator | Tuesday 02 September 2025 00:40:59 +0000 (0:00:00.200) 0:00:09.454 ***** 2025-09-02 00:41:05.587614 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.587625 | orchestrator | 2025-09-02 00:41:05.587636 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:05.587647 | orchestrator | Tuesday 02 September 2025 00:40:59 +0000 (0:00:00.195) 0:00:09.649 ***** 2025-09-02 00:41:05.587658 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.587669 | orchestrator | 2025-09-02 00:41:05.587724 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:05.587736 | orchestrator | Tuesday 02 September 2025 00:40:59 +0000 (0:00:00.200) 0:00:09.850 ***** 2025-09-02 00:41:05.587747 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.587758 | orchestrator | 2025-09-02 00:41:05.587769 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-02 00:41:05.587786 | orchestrator | Tuesday 02 September 2025 00:40:59 +0000 (0:00:00.197) 0:00:10.048 ***** 2025-09-02 00:41:05.587805 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.587822 | orchestrator | 2025-09-02 00:41:05.587840 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-02 00:41:05.587858 | orchestrator | Tuesday 02 September 2025 00:40:59 +0000 (0:00:00.132) 0:00:10.180 ***** 2025-09-02 00:41:05.587876 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'}}) 2025-09-02 00:41:05.587895 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '688b3bb6-a638-5f84-8470-ce7969c766cd'}}) 2025-09-02 00:41:05.587914 | orchestrator | 2025-09-02 00:41:05.587932 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-02 00:41:05.587952 | orchestrator | Tuesday 02 September 2025 00:40:59 +0000 (0:00:00.221) 0:00:10.401 ***** 2025-09-02 00:41:05.587968 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'}) 2025-09-02 00:41:05.588005 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'}) 2025-09-02 00:41:05.588019 | orchestrator | 2025-09-02 00:41:05.588032 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-02 00:41:05.588045 | orchestrator | Tuesday 02 September 2025 00:41:02 +0000 (0:00:02.027) 0:00:12.429 ***** 2025-09-02 00:41:05.588058 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:05.588072 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  2025-09-02 00:41:05.588086 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.588097 | orchestrator | 2025-09-02 00:41:05.588108 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-02 
00:41:05.588119 | orchestrator | Tuesday 02 September 2025 00:41:02 +0000 (0:00:00.147) 0:00:12.576 ***** 2025-09-02 00:41:05.588130 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'}) 2025-09-02 00:41:05.588141 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'}) 2025-09-02 00:41:05.588152 | orchestrator | 2025-09-02 00:41:05.588163 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-02 00:41:05.588174 | orchestrator | Tuesday 02 September 2025 00:41:03 +0000 (0:00:01.450) 0:00:14.027 ***** 2025-09-02 00:41:05.588185 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:05.588197 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  2025-09-02 00:41:05.588209 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.588220 | orchestrator | 2025-09-02 00:41:05.588231 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-02 00:41:05.588243 | orchestrator | Tuesday 02 September 2025 00:41:03 +0000 (0:00:00.164) 0:00:14.192 ***** 2025-09-02 00:41:05.588254 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.588265 | orchestrator | 2025-09-02 00:41:05.588276 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-02 00:41:05.588304 | orchestrator | Tuesday 02 September 2025 00:41:03 +0000 (0:00:00.117) 0:00:14.310 ***** 2025-09-02 00:41:05.588316 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:05.588328 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  2025-09-02 00:41:05.588339 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.588350 | orchestrator | 2025-09-02 00:41:05.588361 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-02 00:41:05.588373 | orchestrator | Tuesday 02 September 2025 00:41:04 +0000 (0:00:00.270) 0:00:14.580 ***** 2025-09-02 00:41:05.588384 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.588395 | orchestrator | 2025-09-02 00:41:05.588406 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-02 00:41:05.588417 | orchestrator | Tuesday 02 September 2025 00:41:04 +0000 (0:00:00.128) 0:00:14.708 ***** 2025-09-02 00:41:05.588428 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:05.588450 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  2025-09-02 00:41:05.588462 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.588473 | orchestrator | 2025-09-02 00:41:05.588484 | orchestrator | 
TASK [Create DB+WAL VGs] ******************************************************* 2025-09-02 00:41:05.588495 | orchestrator | Tuesday 02 September 2025 00:41:04 +0000 (0:00:00.151) 0:00:14.860 ***** 2025-09-02 00:41:05.588506 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.588517 | orchestrator | 2025-09-02 00:41:05.588528 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-02 00:41:05.588539 | orchestrator | Tuesday 02 September 2025 00:41:04 +0000 (0:00:00.127) 0:00:14.988 ***** 2025-09-02 00:41:05.588550 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:05.588562 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  2025-09-02 00:41:05.588573 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.588584 | orchestrator | 2025-09-02 00:41:05.588595 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-02 00:41:05.588607 | orchestrator | Tuesday 02 September 2025 00:41:04 +0000 (0:00:00.129) 0:00:15.117 ***** 2025-09-02 00:41:05.588618 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:41:05.588629 | orchestrator | 2025-09-02 00:41:05.588640 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-02 00:41:05.588651 | orchestrator | Tuesday 02 September 2025 00:41:04 +0000 (0:00:00.127) 0:00:15.245 ***** 2025-09-02 00:41:05.588714 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:05.588728 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  2025-09-02 00:41:05.588739 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.588750 | orchestrator | 2025-09-02 00:41:05.588761 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-02 00:41:05.588772 | orchestrator | Tuesday 02 September 2025 00:41:05 +0000 (0:00:00.228) 0:00:15.474 ***** 2025-09-02 00:41:05.588783 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:05.588794 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  2025-09-02 00:41:05.588805 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.588816 | orchestrator | 2025-09-02 00:41:05.588827 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-02 00:41:05.588838 | orchestrator | Tuesday 02 September 2025 00:41:05 +0000 (0:00:00.145) 0:00:15.620 ***** 2025-09-02 00:41:05.588849 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:05.588860 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  
2025-09-02 00:41:05.588871 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.588882 | orchestrator | 2025-09-02 00:41:05.588893 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-02 00:41:05.588904 | orchestrator | Tuesday 02 September 2025 00:41:05 +0000 (0:00:00.137) 0:00:15.757 ***** 2025-09-02 00:41:05.588915 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.588933 | orchestrator | 2025-09-02 00:41:05.588944 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-02 00:41:05.588955 | orchestrator | Tuesday 02 September 2025 00:41:05 +0000 (0:00:00.122) 0:00:15.879 ***** 2025-09-02 00:41:05.588966 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:05.588978 | orchestrator | 2025-09-02 00:41:05.588995 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-02 00:41:11.912517 | orchestrator | Tuesday 02 September 2025 00:41:05 +0000 (0:00:00.127) 0:00:16.007 ***** 2025-09-02 00:41:11.912602 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.912618 | orchestrator | 2025-09-02 00:41:11.912630 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-02 00:41:11.912642 | orchestrator | Tuesday 02 September 2025 00:41:05 +0000 (0:00:00.125) 0:00:16.133 ***** 2025-09-02 00:41:11.912653 | orchestrator | ok: [testbed-node-3] => { 2025-09-02 00:41:11.912665 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-02 00:41:11.912676 | orchestrator | } 2025-09-02 00:41:11.912687 | orchestrator | 2025-09-02 00:41:11.912744 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-02 00:41:11.912757 | orchestrator | Tuesday 02 September 2025 00:41:05 +0000 (0:00:00.258) 0:00:16.392 ***** 2025-09-02 00:41:11.912769 | orchestrator | ok: [testbed-node-3] => { 2025-09-02 00:41:11.912780 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-02 00:41:11.912790 | orchestrator | } 2025-09-02 00:41:11.912801 | orchestrator | 2025-09-02 00:41:11.912812 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-02 00:41:11.912823 | orchestrator | Tuesday 02 September 2025 00:41:06 +0000 (0:00:00.122) 0:00:16.514 ***** 2025-09-02 00:41:11.912834 | orchestrator | ok: [testbed-node-3] => { 2025-09-02 00:41:11.912845 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-02 00:41:11.912855 | orchestrator | } 2025-09-02 00:41:11.912867 | orchestrator | 2025-09-02 00:41:11.912879 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-02 00:41:11.912889 | orchestrator | Tuesday 02 September 2025 00:41:06 +0000 (0:00:00.129) 0:00:16.644 ***** 2025-09-02 00:41:11.912900 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:41:11.912911 | orchestrator | 2025-09-02 00:41:11.912922 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-02 00:41:11.912933 | orchestrator | Tuesday 02 September 2025 00:41:06 +0000 (0:00:00.663) 0:00:17.308 ***** 2025-09-02 00:41:11.912944 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:41:11.912955 | orchestrator | 2025-09-02 00:41:11.912965 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-02 00:41:11.912976 | orchestrator | Tuesday 02 September 2025 00:41:07 +0000 
(0:00:00.533) 0:00:17.841 ***** 2025-09-02 00:41:11.912987 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:41:11.912997 | orchestrator | 2025-09-02 00:41:11.913008 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-02 00:41:11.913019 | orchestrator | Tuesday 02 September 2025 00:41:07 +0000 (0:00:00.563) 0:00:18.404 ***** 2025-09-02 00:41:11.913030 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:41:11.913040 | orchestrator | 2025-09-02 00:41:11.913051 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-02 00:41:11.913062 | orchestrator | Tuesday 02 September 2025 00:41:08 +0000 (0:00:00.152) 0:00:18.556 ***** 2025-09-02 00:41:11.913075 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.913087 | orchestrator | 2025-09-02 00:41:11.913100 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-02 00:41:11.913113 | orchestrator | Tuesday 02 September 2025 00:41:08 +0000 (0:00:00.121) 0:00:18.678 ***** 2025-09-02 00:41:11.913126 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.913138 | orchestrator | 2025-09-02 00:41:11.913151 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-02 00:41:11.913163 | orchestrator | Tuesday 02 September 2025 00:41:08 +0000 (0:00:00.136) 0:00:18.814 ***** 2025-09-02 00:41:11.913175 | orchestrator | ok: [testbed-node-3] => { 2025-09-02 00:41:11.913209 | orchestrator |  "vgs_report": { 2025-09-02 00:41:11.913264 | orchestrator |  "vg": [] 2025-09-02 00:41:11.913278 | orchestrator |  } 2025-09-02 00:41:11.913291 | orchestrator | } 2025-09-02 00:41:11.913303 | orchestrator | 2025-09-02 00:41:11.913315 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-02 00:41:11.913328 | orchestrator | Tuesday 02 September 2025 00:41:08 +0000 (0:00:00.147) 0:00:18.962 ***** 2025-09-02 00:41:11.913340 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.913353 | orchestrator | 2025-09-02 00:41:11.913366 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-02 00:41:11.913378 | orchestrator | Tuesday 02 September 2025 00:41:08 +0000 (0:00:00.145) 0:00:19.108 ***** 2025-09-02 00:41:11.913391 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.913404 | orchestrator | 2025-09-02 00:41:11.913417 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-02 00:41:11.913429 | orchestrator | Tuesday 02 September 2025 00:41:08 +0000 (0:00:00.144) 0:00:19.253 ***** 2025-09-02 00:41:11.913440 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.913450 | orchestrator | 2025-09-02 00:41:11.913461 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-02 00:41:11.913472 | orchestrator | Tuesday 02 September 2025 00:41:09 +0000 (0:00:00.342) 0:00:19.595 ***** 2025-09-02 00:41:11.913482 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.913493 | orchestrator | 2025-09-02 00:41:11.913504 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-02 00:41:11.913514 | orchestrator | Tuesday 02 September 2025 00:41:09 +0000 (0:00:00.135) 0:00:19.730 ***** 2025-09-02 00:41:11.913525 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.913536 | orchestrator | 
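The "Gather ... VGs with total and available size in bytes" and "Combine JSON" tasks above follow the usual LVM JSON-reporting pattern. A sketch under that assumption (register names mirror the ones mentioned in the log; the real tasks presumably restrict the query to DB/WAL VGs, which is why the combined report is empty here):

```yaml
# Query volume groups with total and free size in bytes, as JSON.
- name: Gather DB VGs with total and available size in bytes
  ansible.builtin.command: >-
    vgs --reportformat json --units b --nosuffix -o vg_name,vg_size,vg_free
  register: _db_vgs_cmd_output
  changed_when: false

# lvm prints {"report": [{"vg": [...]}]}; keeping only the inner object matches the
# "vgs_report": {"vg": []} structure printed above.
- name: Combine JSON from _db/wal/db_wal_vgs_cmd_output
  ansible.builtin.set_fact:
    vgs_report: "{{ (_db_vgs_cmd_output.stdout | from_json).report[0] }}"
```

The subsequent size checks ("Fail if size of DB LVs ... > available", "Fail if DB LV size < 30 GiB ...") then compare the requested LV sizes against vg_free from this report; with no DB/WAL devices configured they are all skipped.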
2025-09-02 00:41:11.913547 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-02 00:41:11.913558 | orchestrator | Tuesday 02 September 2025 00:41:09 +0000 (0:00:00.147) 0:00:19.878 ***** 2025-09-02 00:41:11.913568 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.913579 | orchestrator | 2025-09-02 00:41:11.913589 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-02 00:41:11.913600 | orchestrator | Tuesday 02 September 2025 00:41:09 +0000 (0:00:00.137) 0:00:20.015 ***** 2025-09-02 00:41:11.913611 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.913621 | orchestrator | 2025-09-02 00:41:11.913632 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-02 00:41:11.913642 | orchestrator | Tuesday 02 September 2025 00:41:09 +0000 (0:00:00.145) 0:00:20.161 ***** 2025-09-02 00:41:11.913653 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.913664 | orchestrator | 2025-09-02 00:41:11.913675 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-02 00:41:11.913718 | orchestrator | Tuesday 02 September 2025 00:41:09 +0000 (0:00:00.136) 0:00:20.297 ***** 2025-09-02 00:41:11.913731 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.913742 | orchestrator | 2025-09-02 00:41:11.913753 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-02 00:41:11.913763 | orchestrator | Tuesday 02 September 2025 00:41:10 +0000 (0:00:00.154) 0:00:20.451 ***** 2025-09-02 00:41:11.913774 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.913785 | orchestrator | 2025-09-02 00:41:11.913795 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-02 00:41:11.913806 | orchestrator | Tuesday 02 September 2025 00:41:10 +0000 (0:00:00.124) 0:00:20.576 ***** 2025-09-02 00:41:11.913817 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.913827 | orchestrator | 2025-09-02 00:41:11.913838 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-02 00:41:11.913849 | orchestrator | Tuesday 02 September 2025 00:41:10 +0000 (0:00:00.139) 0:00:20.715 ***** 2025-09-02 00:41:11.913860 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.913870 | orchestrator | 2025-09-02 00:41:11.913889 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-02 00:41:11.913900 | orchestrator | Tuesday 02 September 2025 00:41:10 +0000 (0:00:00.151) 0:00:20.866 ***** 2025-09-02 00:41:11.913911 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.913922 | orchestrator | 2025-09-02 00:41:11.913932 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-02 00:41:11.913943 | orchestrator | Tuesday 02 September 2025 00:41:10 +0000 (0:00:00.156) 0:00:21.023 ***** 2025-09-02 00:41:11.913954 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.913965 | orchestrator | 2025-09-02 00:41:11.913976 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-02 00:41:11.913986 | orchestrator | Tuesday 02 September 2025 00:41:10 +0000 (0:00:00.135) 0:00:21.158 ***** 2025-09-02 00:41:11.913998 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:11.914010 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  2025-09-02 00:41:11.914069 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.914080 | orchestrator | 2025-09-02 00:41:11.914091 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-02 00:41:11.914102 | orchestrator | Tuesday 02 September 2025 00:41:11 +0000 (0:00:00.358) 0:00:21.517 ***** 2025-09-02 00:41:11.914112 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:11.914123 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  2025-09-02 00:41:11.914134 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.914145 | orchestrator | 2025-09-02 00:41:11.914155 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-02 00:41:11.914166 | orchestrator | Tuesday 02 September 2025 00:41:11 +0000 (0:00:00.161) 0:00:21.679 ***** 2025-09-02 00:41:11.914177 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:11.914188 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  2025-09-02 00:41:11.914198 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.914209 | orchestrator | 2025-09-02 00:41:11.914220 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-02 00:41:11.914230 | orchestrator | Tuesday 02 September 2025 00:41:11 +0000 (0:00:00.170) 0:00:21.849 ***** 2025-09-02 00:41:11.914241 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:11.914252 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  2025-09-02 00:41:11.914263 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.914273 | orchestrator | 2025-09-02 00:41:11.914284 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-02 00:41:11.914294 | orchestrator | Tuesday 02 September 2025 00:41:11 +0000 (0:00:00.165) 0:00:22.014 ***** 2025-09-02 00:41:11.914305 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:11.914316 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  2025-09-02 00:41:11.914327 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:11.914344 | orchestrator | 2025-09-02 00:41:11.914354 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 
2025-09-02 00:41:11.914365 | orchestrator | Tuesday 02 September 2025 00:41:11 +0000 (0:00:00.158) 0:00:22.173 ***** 2025-09-02 00:41:11.914383 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:11.914401 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  2025-09-02 00:41:17.809433 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:17.809519 | orchestrator | 2025-09-02 00:41:17.809534 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-02 00:41:17.809547 | orchestrator | Tuesday 02 September 2025 00:41:11 +0000 (0:00:00.156) 0:00:22.330 ***** 2025-09-02 00:41:17.809559 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:17.809571 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  2025-09-02 00:41:17.809582 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:17.809593 | orchestrator | 2025-09-02 00:41:17.809604 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-02 00:41:17.809615 | orchestrator | Tuesday 02 September 2025 00:41:12 +0000 (0:00:00.163) 0:00:22.493 ***** 2025-09-02 00:41:17.809626 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:17.809637 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  2025-09-02 00:41:17.809648 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:17.809659 | orchestrator | 2025-09-02 00:41:17.809670 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-02 00:41:17.809681 | orchestrator | Tuesday 02 September 2025 00:41:12 +0000 (0:00:00.179) 0:00:22.673 ***** 2025-09-02 00:41:17.809692 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:41:17.809703 | orchestrator | 2025-09-02 00:41:17.809714 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-02 00:41:17.809748 | orchestrator | Tuesday 02 September 2025 00:41:12 +0000 (0:00:00.525) 0:00:23.199 ***** 2025-09-02 00:41:17.809760 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:41:17.809771 | orchestrator | 2025-09-02 00:41:17.809781 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-02 00:41:17.809792 | orchestrator | Tuesday 02 September 2025 00:41:13 +0000 (0:00:00.550) 0:00:23.750 ***** 2025-09-02 00:41:17.809802 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:41:17.809813 | orchestrator | 2025-09-02 00:41:17.809824 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-02 00:41:17.809834 | orchestrator | Tuesday 02 September 2025 00:41:13 +0000 (0:00:00.137) 0:00:23.888 ***** 2025-09-02 00:41:17.809845 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 
'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'vg_name': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'}) 2025-09-02 00:41:17.809857 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'vg_name': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'}) 2025-09-02 00:41:17.809867 | orchestrator | 2025-09-02 00:41:17.809893 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-02 00:41:17.809904 | orchestrator | Tuesday 02 September 2025 00:41:13 +0000 (0:00:00.183) 0:00:24.072 ***** 2025-09-02 00:41:17.809915 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:17.809945 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  2025-09-02 00:41:17.809957 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:17.809967 | orchestrator | 2025-09-02 00:41:17.809978 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-02 00:41:17.809989 | orchestrator | Tuesday 02 September 2025 00:41:14 +0000 (0:00:00.365) 0:00:24.438 ***** 2025-09-02 00:41:17.810003 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:17.810065 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  2025-09-02 00:41:17.810079 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:17.810091 | orchestrator | 2025-09-02 00:41:17.810105 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-02 00:41:17.810117 | orchestrator | Tuesday 02 September 2025 00:41:14 +0000 (0:00:00.165) 0:00:24.603 ***** 2025-09-02 00:41:17.810131 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'})  2025-09-02 00:41:17.810144 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'})  2025-09-02 00:41:17.810156 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:41:17.810168 | orchestrator | 2025-09-02 00:41:17.810181 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-02 00:41:17.810194 | orchestrator | Tuesday 02 September 2025 00:41:14 +0000 (0:00:00.155) 0:00:24.759 ***** 2025-09-02 00:41:17.810206 | orchestrator | ok: [testbed-node-3] => { 2025-09-02 00:41:17.810219 | orchestrator |  "lvm_report": { 2025-09-02 00:41:17.810233 | orchestrator |  "lv": [ 2025-09-02 00:41:17.810245 | orchestrator |  { 2025-09-02 00:41:17.810275 | orchestrator |  "lv_name": "osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c", 2025-09-02 00:41:17.810289 | orchestrator |  "vg_name": "ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c" 2025-09-02 00:41:17.810301 | orchestrator |  }, 2025-09-02 00:41:17.810314 | orchestrator |  { 2025-09-02 00:41:17.810327 | orchestrator |  "lv_name": "osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd", 2025-09-02 00:41:17.810340 | orchestrator |  "vg_name": 
"ceph-688b3bb6-a638-5f84-8470-ce7969c766cd" 2025-09-02 00:41:17.810353 | orchestrator |  } 2025-09-02 00:41:17.810364 | orchestrator |  ], 2025-09-02 00:41:17.810375 | orchestrator |  "pv": [ 2025-09-02 00:41:17.810386 | orchestrator |  { 2025-09-02 00:41:17.810397 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-02 00:41:17.810408 | orchestrator |  "vg_name": "ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c" 2025-09-02 00:41:17.810419 | orchestrator |  }, 2025-09-02 00:41:17.810429 | orchestrator |  { 2025-09-02 00:41:17.810440 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-02 00:41:17.810451 | orchestrator |  "vg_name": "ceph-688b3bb6-a638-5f84-8470-ce7969c766cd" 2025-09-02 00:41:17.810462 | orchestrator |  } 2025-09-02 00:41:17.810473 | orchestrator |  ] 2025-09-02 00:41:17.810484 | orchestrator |  } 2025-09-02 00:41:17.810495 | orchestrator | } 2025-09-02 00:41:17.810507 | orchestrator | 2025-09-02 00:41:17.810518 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-02 00:41:17.810529 | orchestrator | 2025-09-02 00:41:17.810540 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-02 00:41:17.810551 | orchestrator | Tuesday 02 September 2025 00:41:14 +0000 (0:00:00.311) 0:00:25.071 ***** 2025-09-02 00:41:17.810562 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-02 00:41:17.810581 | orchestrator | 2025-09-02 00:41:17.810592 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-02 00:41:17.810603 | orchestrator | Tuesday 02 September 2025 00:41:14 +0000 (0:00:00.256) 0:00:25.327 ***** 2025-09-02 00:41:17.810613 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:41:17.810624 | orchestrator | 2025-09-02 00:41:17.810635 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:17.810646 | orchestrator | Tuesday 02 September 2025 00:41:15 +0000 (0:00:00.229) 0:00:25.557 ***** 2025-09-02 00:41:17.810657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-02 00:41:17.810668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-02 00:41:17.810679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-02 00:41:17.810689 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-02 00:41:17.810700 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-02 00:41:17.810711 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-02 00:41:17.810722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-02 00:41:17.810768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-02 00:41:17.810780 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-02 00:41:17.810791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-02 00:41:17.810802 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-02 00:41:17.810812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 
2025-09-02 00:41:17.810823 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-02 00:41:17.810834 | orchestrator | 2025-09-02 00:41:17.810845 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:17.810855 | orchestrator | Tuesday 02 September 2025 00:41:15 +0000 (0:00:00.434) 0:00:25.991 ***** 2025-09-02 00:41:17.810866 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:17.810876 | orchestrator | 2025-09-02 00:41:17.810887 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:17.810898 | orchestrator | Tuesday 02 September 2025 00:41:15 +0000 (0:00:00.221) 0:00:26.213 ***** 2025-09-02 00:41:17.810909 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:17.810920 | orchestrator | 2025-09-02 00:41:17.810931 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:17.810941 | orchestrator | Tuesday 02 September 2025 00:41:16 +0000 (0:00:00.262) 0:00:26.475 ***** 2025-09-02 00:41:17.810952 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:17.810962 | orchestrator | 2025-09-02 00:41:17.810973 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:17.810984 | orchestrator | Tuesday 02 September 2025 00:41:16 +0000 (0:00:00.812) 0:00:27.288 ***** 2025-09-02 00:41:17.810995 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:17.811005 | orchestrator | 2025-09-02 00:41:17.811016 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:17.811027 | orchestrator | Tuesday 02 September 2025 00:41:17 +0000 (0:00:00.276) 0:00:27.565 ***** 2025-09-02 00:41:17.811038 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:17.811048 | orchestrator | 2025-09-02 00:41:17.811059 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:17.811070 | orchestrator | Tuesday 02 September 2025 00:41:17 +0000 (0:00:00.187) 0:00:27.752 ***** 2025-09-02 00:41:17.811080 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:17.811091 | orchestrator | 2025-09-02 00:41:17.811108 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:17.811119 | orchestrator | Tuesday 02 September 2025 00:41:17 +0000 (0:00:00.274) 0:00:28.027 ***** 2025-09-02 00:41:17.811130 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:17.811141 | orchestrator | 2025-09-02 00:41:17.811159 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:28.498617 | orchestrator | Tuesday 02 September 2025 00:41:17 +0000 (0:00:00.201) 0:00:28.228 ***** 2025-09-02 00:41:28.498725 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:28.498743 | orchestrator | 2025-09-02 00:41:28.498756 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:28.498768 | orchestrator | Tuesday 02 September 2025 00:41:18 +0000 (0:00:00.240) 0:00:28.469 ***** 2025-09-02 00:41:28.498871 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7) 2025-09-02 00:41:28.498887 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7) 2025-09-02 
00:41:28.498898 | orchestrator | 2025-09-02 00:41:28.498910 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:28.498921 | orchestrator | Tuesday 02 September 2025 00:41:18 +0000 (0:00:00.431) 0:00:28.901 ***** 2025-09-02 00:41:28.498931 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ab73bc68-b021-49f6-bbbb-bb60dd18c0cd) 2025-09-02 00:41:28.498943 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ab73bc68-b021-49f6-bbbb-bb60dd18c0cd) 2025-09-02 00:41:28.498954 | orchestrator | 2025-09-02 00:41:28.498965 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:28.498976 | orchestrator | Tuesday 02 September 2025 00:41:18 +0000 (0:00:00.412) 0:00:29.313 ***** 2025-09-02 00:41:28.498987 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0c16e54e-6892-4c41-822b-0a71b602051a) 2025-09-02 00:41:28.498998 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0c16e54e-6892-4c41-822b-0a71b602051a) 2025-09-02 00:41:28.499009 | orchestrator | 2025-09-02 00:41:28.499020 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:28.499031 | orchestrator | Tuesday 02 September 2025 00:41:19 +0000 (0:00:00.406) 0:00:29.719 ***** 2025-09-02 00:41:28.499042 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5a98751c-9a0d-464e-b805-2bbf5e836a0e) 2025-09-02 00:41:28.499053 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5a98751c-9a0d-464e-b805-2bbf5e836a0e) 2025-09-02 00:41:28.499064 | orchestrator | 2025-09-02 00:41:28.499075 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:28.499086 | orchestrator | Tuesday 02 September 2025 00:41:19 +0000 (0:00:00.456) 0:00:30.176 ***** 2025-09-02 00:41:28.499097 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-02 00:41:28.499108 | orchestrator | 2025-09-02 00:41:28.499119 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:28.499130 | orchestrator | Tuesday 02 September 2025 00:41:20 +0000 (0:00:00.350) 0:00:30.527 ***** 2025-09-02 00:41:28.499141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-02 00:41:28.499155 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-02 00:41:28.499168 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-02 00:41:28.499181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-02 00:41:28.499193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-02 00:41:28.499205 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-02 00:41:28.499233 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-02 00:41:28.499264 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-02 00:41:28.499277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-02 00:41:28.499290 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-02 00:41:28.499302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-02 00:41:28.499314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-02 00:41:28.499326 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-02 00:41:28.499339 | orchestrator | 2025-09-02 00:41:28.499351 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:28.499384 | orchestrator | Tuesday 02 September 2025 00:41:20 +0000 (0:00:00.644) 0:00:31.171 ***** 2025-09-02 00:41:28.499397 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:28.499410 | orchestrator | 2025-09-02 00:41:28.499422 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:28.499435 | orchestrator | Tuesday 02 September 2025 00:41:20 +0000 (0:00:00.218) 0:00:31.389 ***** 2025-09-02 00:41:28.499447 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:28.499460 | orchestrator | 2025-09-02 00:41:28.499473 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:28.499485 | orchestrator | Tuesday 02 September 2025 00:41:21 +0000 (0:00:00.211) 0:00:31.601 ***** 2025-09-02 00:41:28.499497 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:28.499510 | orchestrator | 2025-09-02 00:41:28.499522 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:28.499534 | orchestrator | Tuesday 02 September 2025 00:41:21 +0000 (0:00:00.207) 0:00:31.808 ***** 2025-09-02 00:41:28.499545 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:28.499556 | orchestrator | 2025-09-02 00:41:28.499585 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:28.499597 | orchestrator | Tuesday 02 September 2025 00:41:21 +0000 (0:00:00.212) 0:00:32.020 ***** 2025-09-02 00:41:28.499608 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:28.499619 | orchestrator | 2025-09-02 00:41:28.499630 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:28.499641 | orchestrator | Tuesday 02 September 2025 00:41:21 +0000 (0:00:00.216) 0:00:32.237 ***** 2025-09-02 00:41:28.499652 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:28.499663 | orchestrator | 2025-09-02 00:41:28.499674 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:28.499685 | orchestrator | Tuesday 02 September 2025 00:41:22 +0000 (0:00:00.218) 0:00:32.455 ***** 2025-09-02 00:41:28.499696 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:28.499707 | orchestrator | 2025-09-02 00:41:28.499718 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:28.499728 | orchestrator | Tuesday 02 September 2025 00:41:22 +0000 (0:00:00.260) 0:00:32.716 ***** 2025-09-02 00:41:28.499739 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:28.499750 | orchestrator | 2025-09-02 00:41:28.499761 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:28.499793 | orchestrator 
| Tuesday 02 September 2025 00:41:22 +0000 (0:00:00.224) 0:00:32.941 ***** 2025-09-02 00:41:28.499806 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-02 00:41:28.499817 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-02 00:41:28.499827 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-02 00:41:28.499838 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-02 00:41:28.499849 | orchestrator | 2025-09-02 00:41:28.499861 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:28.499872 | orchestrator | Tuesday 02 September 2025 00:41:23 +0000 (0:00:00.857) 0:00:33.799 ***** 2025-09-02 00:41:28.499891 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:28.499902 | orchestrator | 2025-09-02 00:41:28.499913 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:28.499924 | orchestrator | Tuesday 02 September 2025 00:41:23 +0000 (0:00:00.214) 0:00:34.013 ***** 2025-09-02 00:41:28.499935 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:28.499945 | orchestrator | 2025-09-02 00:41:28.499957 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:28.499967 | orchestrator | Tuesday 02 September 2025 00:41:23 +0000 (0:00:00.178) 0:00:34.192 ***** 2025-09-02 00:41:28.499978 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:28.499989 | orchestrator | 2025-09-02 00:41:28.500000 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:28.500011 | orchestrator | Tuesday 02 September 2025 00:41:24 +0000 (0:00:00.656) 0:00:34.849 ***** 2025-09-02 00:41:28.500022 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:28.500033 | orchestrator | 2025-09-02 00:41:28.500044 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-02 00:41:28.500055 | orchestrator | Tuesday 02 September 2025 00:41:24 +0000 (0:00:00.240) 0:00:35.090 ***** 2025-09-02 00:41:28.500071 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:28.500082 | orchestrator | 2025-09-02 00:41:28.500093 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-02 00:41:28.500104 | orchestrator | Tuesday 02 September 2025 00:41:24 +0000 (0:00:00.130) 0:00:35.220 ***** 2025-09-02 00:41:28.500115 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de858a7c-8c7c-5154-a7df-793b28d7d942'}}) 2025-09-02 00:41:28.500127 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4843a7b7-fb51-5101-86f0-3e9039878e37'}}) 2025-09-02 00:41:28.500138 | orchestrator | 2025-09-02 00:41:28.500149 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-02 00:41:28.500160 | orchestrator | Tuesday 02 September 2025 00:41:25 +0000 (0:00:00.211) 0:00:35.432 ***** 2025-09-02 00:41:28.500172 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'}) 2025-09-02 00:41:28.500183 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'}) 2025-09-02 00:41:28.500194 | orchestrator | 2025-09-02 00:41:28.500205 | orchestrator | TASK 
[Print 'Create block VGs'] ************************************************ 2025-09-02 00:41:28.500216 | orchestrator | Tuesday 02 September 2025 00:41:26 +0000 (0:00:01.934) 0:00:37.366 ***** 2025-09-02 00:41:28.500227 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:28.500239 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:28.500250 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:28.500261 | orchestrator | 2025-09-02 00:41:28.500272 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-02 00:41:28.500283 | orchestrator | Tuesday 02 September 2025 00:41:27 +0000 (0:00:00.153) 0:00:37.520 ***** 2025-09-02 00:41:28.500294 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'}) 2025-09-02 00:41:28.500305 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'}) 2025-09-02 00:41:28.500316 | orchestrator | 2025-09-02 00:41:28.500334 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-02 00:41:34.203193 | orchestrator | Tuesday 02 September 2025 00:41:28 +0000 (0:00:01.390) 0:00:38.911 ***** 2025-09-02 00:41:34.203331 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:34.203349 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:34.203361 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:34.203373 | orchestrator | 2025-09-02 00:41:34.203385 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-02 00:41:34.203397 | orchestrator | Tuesday 02 September 2025 00:41:28 +0000 (0:00:00.169) 0:00:39.080 ***** 2025-09-02 00:41:34.203408 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:34.203418 | orchestrator | 2025-09-02 00:41:34.203430 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-02 00:41:34.203441 | orchestrator | Tuesday 02 September 2025 00:41:28 +0000 (0:00:00.153) 0:00:39.234 ***** 2025-09-02 00:41:34.203452 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:34.203463 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:34.203474 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:34.203486 | orchestrator | 2025-09-02 00:41:34.203496 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-02 00:41:34.203507 | orchestrator | Tuesday 02 September 2025 00:41:28 +0000 (0:00:00.186) 0:00:39.421 ***** 2025-09-02 00:41:34.203518 | orchestrator | skipping: [testbed-node-4] 
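The "Create dict of block VGs -> PVs from ceph_osd_devices", "Create block VGs" and "Create block LVs" tasks above turn every entry of ceph_osd_devices into one LVM volume group (ceph-<osd_lvm_uuid>) backed by the raw device and one logical volume (osd-block-<osd_lvm_uuid>) inside it. The task files that actually run live under /ansible/tasks/ in the OSISM container and are not part of this log; the following is only a minimal sketch of that step, assuming the community.general.lvg and community.general.lvol modules and reusing the device names and osd_lvm_uuid values echoed in the log for testbed-node-4.

# Minimal sketch only; device names and osd_lvm_uuid values are copied from
# the log output above, everything else (play layout, modules) is assumed.
- hosts: testbed-node-4
  become: true
  vars:
    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: de858a7c-8c7c-5154-a7df-793b28d7d942
      sdc:
        osd_lvm_uuid: 4843a7b7-fb51-5101-86f0-3e9039878e37
  tasks:
    - name: Create block VGs            # one VG per OSD device
      community.general.lvg:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        pvs: "/dev/{{ item.key }}"
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Create block LVs            # one data LV filling the whole VG
      community.general.lvol:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
        size: 100%FREE
      loop: "{{ ceph_osd_devices | dict2items }}"

Pre-created VG/LV pairs like these are what the lvm_volumes items looped over in the surrounding tasks describe (data: osd-block-<uuid>, data_vg: ceph-<uuid>), which is why the subsequent DB/WAL creation tasks are skipped on this all-in-one testbed.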
2025-09-02 00:41:34.203529 | orchestrator | 2025-09-02 00:41:34.203540 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-02 00:41:34.203551 | orchestrator | Tuesday 02 September 2025 00:41:29 +0000 (0:00:00.139) 0:00:39.560 ***** 2025-09-02 00:41:34.203562 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:34.203573 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:34.203584 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:34.203595 | orchestrator | 2025-09-02 00:41:34.203606 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-02 00:41:34.203617 | orchestrator | Tuesday 02 September 2025 00:41:29 +0000 (0:00:00.137) 0:00:39.697 ***** 2025-09-02 00:41:34.203642 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:34.203654 | orchestrator | 2025-09-02 00:41:34.203665 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-02 00:41:34.203676 | orchestrator | Tuesday 02 September 2025 00:41:29 +0000 (0:00:00.346) 0:00:40.043 ***** 2025-09-02 00:41:34.203687 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:34.203698 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:34.203709 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:34.203722 | orchestrator | 2025-09-02 00:41:34.203735 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-02 00:41:34.203747 | orchestrator | Tuesday 02 September 2025 00:41:29 +0000 (0:00:00.140) 0:00:40.183 ***** 2025-09-02 00:41:34.203784 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:41:34.203851 | orchestrator | 2025-09-02 00:41:34.203872 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-02 00:41:34.203889 | orchestrator | Tuesday 02 September 2025 00:41:29 +0000 (0:00:00.144) 0:00:40.328 ***** 2025-09-02 00:41:34.203920 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:34.203938 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:34.203953 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:34.203969 | orchestrator | 2025-09-02 00:41:34.203987 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-02 00:41:34.204005 | orchestrator | Tuesday 02 September 2025 00:41:30 +0000 (0:00:00.160) 0:00:40.488 ***** 2025-09-02 00:41:34.204022 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:34.204040 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:34.204059 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:34.204078 | orchestrator | 2025-09-02 00:41:34.204096 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-02 00:41:34.204114 | orchestrator | Tuesday 02 September 2025 00:41:30 +0000 (0:00:00.156) 0:00:40.645 ***** 2025-09-02 00:41:34.204152 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:34.204164 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:34.204175 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:34.204185 | orchestrator | 2025-09-02 00:41:34.204196 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-02 00:41:34.204207 | orchestrator | Tuesday 02 September 2025 00:41:30 +0000 (0:00:00.162) 0:00:40.808 ***** 2025-09-02 00:41:34.204218 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:34.204228 | orchestrator | 2025-09-02 00:41:34.204239 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-02 00:41:34.204249 | orchestrator | Tuesday 02 September 2025 00:41:30 +0000 (0:00:00.144) 0:00:40.952 ***** 2025-09-02 00:41:34.204260 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:34.204270 | orchestrator | 2025-09-02 00:41:34.204281 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-02 00:41:34.204292 | orchestrator | Tuesday 02 September 2025 00:41:30 +0000 (0:00:00.128) 0:00:41.080 ***** 2025-09-02 00:41:34.204302 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:34.204313 | orchestrator | 2025-09-02 00:41:34.204323 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-02 00:41:34.204334 | orchestrator | Tuesday 02 September 2025 00:41:30 +0000 (0:00:00.136) 0:00:41.217 ***** 2025-09-02 00:41:34.204345 | orchestrator | ok: [testbed-node-4] => { 2025-09-02 00:41:34.204355 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-02 00:41:34.204366 | orchestrator | } 2025-09-02 00:41:34.204377 | orchestrator | 2025-09-02 00:41:34.204388 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-02 00:41:34.204399 | orchestrator | Tuesday 02 September 2025 00:41:30 +0000 (0:00:00.152) 0:00:41.369 ***** 2025-09-02 00:41:34.204409 | orchestrator | ok: [testbed-node-4] => { 2025-09-02 00:41:34.204420 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-02 00:41:34.204430 | orchestrator | } 2025-09-02 00:41:34.204441 | orchestrator | 2025-09-02 00:41:34.204452 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-02 00:41:34.204462 | orchestrator | Tuesday 02 September 2025 00:41:31 +0000 (0:00:00.150) 0:00:41.520 ***** 2025-09-02 00:41:34.204473 | orchestrator | ok: [testbed-node-4] => { 2025-09-02 00:41:34.204484 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-02 00:41:34.204503 | orchestrator | } 2025-09-02 00:41:34.204514 | orchestrator | 2025-09-02 00:41:34.204525 | orchestrator | TASK [Gather DB VGs 
with total and available size in bytes] ******************** 2025-09-02 00:41:34.204535 | orchestrator | Tuesday 02 September 2025 00:41:31 +0000 (0:00:00.162) 0:00:41.682 ***** 2025-09-02 00:41:34.204546 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:41:34.204557 | orchestrator | 2025-09-02 00:41:34.204568 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-02 00:41:34.204578 | orchestrator | Tuesday 02 September 2025 00:41:31 +0000 (0:00:00.721) 0:00:42.404 ***** 2025-09-02 00:41:34.204590 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:41:34.204600 | orchestrator | 2025-09-02 00:41:34.204611 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-02 00:41:34.204622 | orchestrator | Tuesday 02 September 2025 00:41:32 +0000 (0:00:00.515) 0:00:42.920 ***** 2025-09-02 00:41:34.204633 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:41:34.204644 | orchestrator | 2025-09-02 00:41:34.204654 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-02 00:41:34.204665 | orchestrator | Tuesday 02 September 2025 00:41:33 +0000 (0:00:00.546) 0:00:43.466 ***** 2025-09-02 00:41:34.204675 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:41:34.204686 | orchestrator | 2025-09-02 00:41:34.204697 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-02 00:41:34.204707 | orchestrator | Tuesday 02 September 2025 00:41:33 +0000 (0:00:00.150) 0:00:43.616 ***** 2025-09-02 00:41:34.204718 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:34.204729 | orchestrator | 2025-09-02 00:41:34.204739 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-02 00:41:34.204750 | orchestrator | Tuesday 02 September 2025 00:41:33 +0000 (0:00:00.117) 0:00:43.734 ***** 2025-09-02 00:41:34.204771 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:34.204782 | orchestrator | 2025-09-02 00:41:34.204819 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-02 00:41:34.204830 | orchestrator | Tuesday 02 September 2025 00:41:33 +0000 (0:00:00.113) 0:00:43.848 ***** 2025-09-02 00:41:34.204841 | orchestrator | ok: [testbed-node-4] => { 2025-09-02 00:41:34.204852 | orchestrator |  "vgs_report": { 2025-09-02 00:41:34.204864 | orchestrator |  "vg": [] 2025-09-02 00:41:34.204876 | orchestrator |  } 2025-09-02 00:41:34.204887 | orchestrator | } 2025-09-02 00:41:34.204898 | orchestrator | 2025-09-02 00:41:34.204909 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-02 00:41:34.204920 | orchestrator | Tuesday 02 September 2025 00:41:33 +0000 (0:00:00.201) 0:00:44.049 ***** 2025-09-02 00:41:34.204931 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:34.204941 | orchestrator | 2025-09-02 00:41:34.204952 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-02 00:41:34.204963 | orchestrator | Tuesday 02 September 2025 00:41:33 +0000 (0:00:00.141) 0:00:44.190 ***** 2025-09-02 00:41:34.204974 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:34.204985 | orchestrator | 2025-09-02 00:41:34.204996 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-02 00:41:34.205007 | orchestrator | Tuesday 02 September 2025 00:41:33 +0000 
(0:00:00.128) 0:00:44.318 ***** 2025-09-02 00:41:34.205018 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:34.205029 | orchestrator | 2025-09-02 00:41:34.205040 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-02 00:41:34.205051 | orchestrator | Tuesday 02 September 2025 00:41:34 +0000 (0:00:00.147) 0:00:44.466 ***** 2025-09-02 00:41:34.205062 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:34.205073 | orchestrator | 2025-09-02 00:41:34.205084 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-02 00:41:34.205101 | orchestrator | Tuesday 02 September 2025 00:41:34 +0000 (0:00:00.156) 0:00:44.622 ***** 2025-09-02 00:41:39.072100 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.072203 | orchestrator | 2025-09-02 00:41:39.072244 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-02 00:41:39.072257 | orchestrator | Tuesday 02 September 2025 00:41:34 +0000 (0:00:00.135) 0:00:44.758 ***** 2025-09-02 00:41:39.072268 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.072279 | orchestrator | 2025-09-02 00:41:39.072290 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-02 00:41:39.072302 | orchestrator | Tuesday 02 September 2025 00:41:34 +0000 (0:00:00.418) 0:00:45.176 ***** 2025-09-02 00:41:39.072312 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.072323 | orchestrator | 2025-09-02 00:41:39.072334 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-02 00:41:39.072344 | orchestrator | Tuesday 02 September 2025 00:41:34 +0000 (0:00:00.139) 0:00:45.315 ***** 2025-09-02 00:41:39.072355 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.072366 | orchestrator | 2025-09-02 00:41:39.072377 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-02 00:41:39.072387 | orchestrator | Tuesday 02 September 2025 00:41:35 +0000 (0:00:00.137) 0:00:45.453 ***** 2025-09-02 00:41:39.072398 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.072409 | orchestrator | 2025-09-02 00:41:39.072420 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-02 00:41:39.072431 | orchestrator | Tuesday 02 September 2025 00:41:35 +0000 (0:00:00.141) 0:00:45.594 ***** 2025-09-02 00:41:39.072441 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.072452 | orchestrator | 2025-09-02 00:41:39.072463 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-02 00:41:39.072473 | orchestrator | Tuesday 02 September 2025 00:41:35 +0000 (0:00:00.134) 0:00:45.729 ***** 2025-09-02 00:41:39.072484 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.072495 | orchestrator | 2025-09-02 00:41:39.072506 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-02 00:41:39.072516 | orchestrator | Tuesday 02 September 2025 00:41:35 +0000 (0:00:00.148) 0:00:45.877 ***** 2025-09-02 00:41:39.072527 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.072538 | orchestrator | 2025-09-02 00:41:39.072549 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-02 00:41:39.072560 | orchestrator | Tuesday 02 September 2025 
00:41:35 +0000 (0:00:00.151) 0:00:46.029 ***** 2025-09-02 00:41:39.072570 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.072581 | orchestrator | 2025-09-02 00:41:39.072592 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-02 00:41:39.072602 | orchestrator | Tuesday 02 September 2025 00:41:35 +0000 (0:00:00.129) 0:00:46.159 ***** 2025-09-02 00:41:39.072613 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.072624 | orchestrator | 2025-09-02 00:41:39.072637 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-02 00:41:39.072650 | orchestrator | Tuesday 02 September 2025 00:41:35 +0000 (0:00:00.156) 0:00:46.315 ***** 2025-09-02 00:41:39.072680 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:39.072696 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:39.072709 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.072721 | orchestrator | 2025-09-02 00:41:39.072734 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-02 00:41:39.072747 | orchestrator | Tuesday 02 September 2025 00:41:36 +0000 (0:00:00.172) 0:00:46.488 ***** 2025-09-02 00:41:39.072760 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:39.072772 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:39.072793 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.072806 | orchestrator | 2025-09-02 00:41:39.072841 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-02 00:41:39.072855 | orchestrator | Tuesday 02 September 2025 00:41:36 +0000 (0:00:00.177) 0:00:46.666 ***** 2025-09-02 00:41:39.072867 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:39.072880 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:39.072893 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.072906 | orchestrator | 2025-09-02 00:41:39.072919 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-02 00:41:39.072931 | orchestrator | Tuesday 02 September 2025 00:41:36 +0000 (0:00:00.153) 0:00:46.819 ***** 2025-09-02 00:41:39.072944 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:39.072957 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:39.072970 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.072982 | orchestrator | 2025-09-02 00:41:39.072995 | 
orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-02 00:41:39.073023 | orchestrator | Tuesday 02 September 2025 00:41:36 +0000 (0:00:00.366) 0:00:47.185 ***** 2025-09-02 00:41:39.073035 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:39.073046 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:39.073057 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.073067 | orchestrator | 2025-09-02 00:41:39.073078 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-02 00:41:39.073089 | orchestrator | Tuesday 02 September 2025 00:41:36 +0000 (0:00:00.142) 0:00:47.327 ***** 2025-09-02 00:41:39.073099 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:39.073110 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:39.073121 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.073132 | orchestrator | 2025-09-02 00:41:39.073143 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-02 00:41:39.073154 | orchestrator | Tuesday 02 September 2025 00:41:37 +0000 (0:00:00.149) 0:00:47.477 ***** 2025-09-02 00:41:39.073165 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:39.073176 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:39.073186 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.073197 | orchestrator | 2025-09-02 00:41:39.073208 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-02 00:41:39.073219 | orchestrator | Tuesday 02 September 2025 00:41:37 +0000 (0:00:00.167) 0:00:47.645 ***** 2025-09-02 00:41:39.073230 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:39.073248 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:39.073259 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.073269 | orchestrator | 2025-09-02 00:41:39.073285 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-02 00:41:39.073297 | orchestrator | Tuesday 02 September 2025 00:41:37 +0000 (0:00:00.150) 0:00:47.795 ***** 2025-09-02 00:41:39.073308 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:41:39.073319 | orchestrator | 2025-09-02 00:41:39.073329 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-02 00:41:39.073340 | orchestrator | Tuesday 02 September 2025 00:41:37 +0000 (0:00:00.511) 
0:00:48.307 ***** 2025-09-02 00:41:39.073351 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:41:39.073362 | orchestrator | 2025-09-02 00:41:39.073373 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-02 00:41:39.073383 | orchestrator | Tuesday 02 September 2025 00:41:38 +0000 (0:00:00.499) 0:00:48.807 ***** 2025-09-02 00:41:39.073394 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:41:39.073405 | orchestrator | 2025-09-02 00:41:39.073416 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-02 00:41:39.073427 | orchestrator | Tuesday 02 September 2025 00:41:38 +0000 (0:00:00.155) 0:00:48.962 ***** 2025-09-02 00:41:39.073438 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'vg_name': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'}) 2025-09-02 00:41:39.073450 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'vg_name': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'}) 2025-09-02 00:41:39.073461 | orchestrator | 2025-09-02 00:41:39.073472 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-02 00:41:39.073483 | orchestrator | Tuesday 02 September 2025 00:41:38 +0000 (0:00:00.180) 0:00:49.143 ***** 2025-09-02 00:41:39.073494 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:39.073505 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:39.073516 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:39.073527 | orchestrator | 2025-09-02 00:41:39.073538 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-02 00:41:39.073549 | orchestrator | Tuesday 02 September 2025 00:41:38 +0000 (0:00:00.170) 0:00:49.313 ***** 2025-09-02 00:41:39.073560 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:39.073571 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:39.073587 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:45.515440 | orchestrator | 2025-09-02 00:41:45.515549 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-02 00:41:45.515565 | orchestrator | Tuesday 02 September 2025 00:41:39 +0000 (0:00:00.174) 0:00:49.488 ***** 2025-09-02 00:41:45.515579 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'})  2025-09-02 00:41:45.515593 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'})  2025-09-02 00:41:45.515604 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:41:45.515616 | orchestrator | 2025-09-02 00:41:45.515627 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-02 00:41:45.515638 
| orchestrator | Tuesday 02 September 2025 00:41:39 +0000 (0:00:00.169) 0:00:49.658 ***** 2025-09-02 00:41:45.515675 | orchestrator | ok: [testbed-node-4] => { 2025-09-02 00:41:45.515687 | orchestrator |  "lvm_report": { 2025-09-02 00:41:45.515701 | orchestrator |  "lv": [ 2025-09-02 00:41:45.515712 | orchestrator |  { 2025-09-02 00:41:45.515724 | orchestrator |  "lv_name": "osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37", 2025-09-02 00:41:45.515736 | orchestrator |  "vg_name": "ceph-4843a7b7-fb51-5101-86f0-3e9039878e37" 2025-09-02 00:41:45.515747 | orchestrator |  }, 2025-09-02 00:41:45.515758 | orchestrator |  { 2025-09-02 00:41:45.515769 | orchestrator |  "lv_name": "osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942", 2025-09-02 00:41:45.515780 | orchestrator |  "vg_name": "ceph-de858a7c-8c7c-5154-a7df-793b28d7d942" 2025-09-02 00:41:45.515791 | orchestrator |  } 2025-09-02 00:41:45.515801 | orchestrator |  ], 2025-09-02 00:41:45.515812 | orchestrator |  "pv": [ 2025-09-02 00:41:45.515823 | orchestrator |  { 2025-09-02 00:41:45.515834 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-02 00:41:45.515907 | orchestrator |  "vg_name": "ceph-de858a7c-8c7c-5154-a7df-793b28d7d942" 2025-09-02 00:41:45.515919 | orchestrator |  }, 2025-09-02 00:41:45.515930 | orchestrator |  { 2025-09-02 00:41:45.515941 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-02 00:41:45.515951 | orchestrator |  "vg_name": "ceph-4843a7b7-fb51-5101-86f0-3e9039878e37" 2025-09-02 00:41:45.515962 | orchestrator |  } 2025-09-02 00:41:45.515972 | orchestrator |  ] 2025-09-02 00:41:45.515985 | orchestrator |  } 2025-09-02 00:41:45.515999 | orchestrator | } 2025-09-02 00:41:45.516012 | orchestrator | 2025-09-02 00:41:45.516024 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-02 00:41:45.516036 | orchestrator | 2025-09-02 00:41:45.516048 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-02 00:41:45.516061 | orchestrator | Tuesday 02 September 2025 00:41:39 +0000 (0:00:00.490) 0:00:50.148 ***** 2025-09-02 00:41:45.516074 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-02 00:41:45.516087 | orchestrator | 2025-09-02 00:41:45.516099 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-02 00:41:45.516112 | orchestrator | Tuesday 02 September 2025 00:41:39 +0000 (0:00:00.249) 0:00:50.398 ***** 2025-09-02 00:41:45.516125 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:41:45.516139 | orchestrator | 2025-09-02 00:41:45.516151 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:45.516164 | orchestrator | Tuesday 02 September 2025 00:41:40 +0000 (0:00:00.233) 0:00:50.632 ***** 2025-09-02 00:41:45.516176 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-02 00:41:45.516189 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-02 00:41:45.516201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-02 00:41:45.516214 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-02 00:41:45.516225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-02 00:41:45.516238 | orchestrator | included: /ansible/tasks/_add-device-links.yml 
for testbed-node-5 => (item=loop5) 2025-09-02 00:41:45.516250 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-02 00:41:45.516262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-02 00:41:45.516275 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-02 00:41:45.516287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-02 00:41:45.516300 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-02 00:41:45.516322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-02 00:41:45.516334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-02 00:41:45.516345 | orchestrator | 2025-09-02 00:41:45.516355 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:45.516366 | orchestrator | Tuesday 02 September 2025 00:41:40 +0000 (0:00:00.418) 0:00:51.050 ***** 2025-09-02 00:41:45.516376 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:45.516392 | orchestrator | 2025-09-02 00:41:45.516403 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:45.516414 | orchestrator | Tuesday 02 September 2025 00:41:40 +0000 (0:00:00.218) 0:00:51.269 ***** 2025-09-02 00:41:45.516424 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:45.516435 | orchestrator | 2025-09-02 00:41:45.516446 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:45.516473 | orchestrator | Tuesday 02 September 2025 00:41:41 +0000 (0:00:00.210) 0:00:51.480 ***** 2025-09-02 00:41:45.516485 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:45.516496 | orchestrator | 2025-09-02 00:41:45.516507 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:45.516517 | orchestrator | Tuesday 02 September 2025 00:41:41 +0000 (0:00:00.258) 0:00:51.738 ***** 2025-09-02 00:41:45.516528 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:45.516539 | orchestrator | 2025-09-02 00:41:45.516549 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:45.516560 | orchestrator | Tuesday 02 September 2025 00:41:41 +0000 (0:00:00.227) 0:00:51.965 ***** 2025-09-02 00:41:45.516571 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:45.516582 | orchestrator | 2025-09-02 00:41:45.516640 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:45.516652 | orchestrator | Tuesday 02 September 2025 00:41:41 +0000 (0:00:00.225) 0:00:52.191 ***** 2025-09-02 00:41:45.516663 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:45.516674 | orchestrator | 2025-09-02 00:41:45.516685 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:45.516696 | orchestrator | Tuesday 02 September 2025 00:41:42 +0000 (0:00:00.756) 0:00:52.947 ***** 2025-09-02 00:41:45.516706 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:45.516717 | orchestrator | 2025-09-02 00:41:45.516728 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2025-09-02 00:41:45.516739 | orchestrator | Tuesday 02 September 2025 00:41:42 +0000 (0:00:00.213) 0:00:53.161 ***** 2025-09-02 00:41:45.516749 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:45.516760 | orchestrator | 2025-09-02 00:41:45.516771 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:45.516781 | orchestrator | Tuesday 02 September 2025 00:41:42 +0000 (0:00:00.238) 0:00:53.399 ***** 2025-09-02 00:41:45.516792 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5) 2025-09-02 00:41:45.516804 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5) 2025-09-02 00:41:45.516815 | orchestrator | 2025-09-02 00:41:45.516826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:45.516837 | orchestrator | Tuesday 02 September 2025 00:41:43 +0000 (0:00:00.441) 0:00:53.841 ***** 2025-09-02 00:41:45.516875 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8c8a1bad-d3c8-4ce8-afe3-e6b6dedd17eb) 2025-09-02 00:41:45.516887 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8c8a1bad-d3c8-4ce8-afe3-e6b6dedd17eb) 2025-09-02 00:41:45.516898 | orchestrator | 2025-09-02 00:41:45.516908 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:45.516919 | orchestrator | Tuesday 02 September 2025 00:41:43 +0000 (0:00:00.421) 0:00:54.262 ***** 2025-09-02 00:41:45.516943 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4851d1ac-b90a-4b34-9adb-d79585c21de6) 2025-09-02 00:41:45.516954 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4851d1ac-b90a-4b34-9adb-d79585c21de6) 2025-09-02 00:41:45.516965 | orchestrator | 2025-09-02 00:41:45.516976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:45.516986 | orchestrator | Tuesday 02 September 2025 00:41:44 +0000 (0:00:00.457) 0:00:54.720 ***** 2025-09-02 00:41:45.516997 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6860efa8-e6c6-43d7-8842-eeafa8a27f70) 2025-09-02 00:41:45.517008 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6860efa8-e6c6-43d7-8842-eeafa8a27f70) 2025-09-02 00:41:45.517019 | orchestrator | 2025-09-02 00:41:45.517029 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-02 00:41:45.517040 | orchestrator | Tuesday 02 September 2025 00:41:44 +0000 (0:00:00.435) 0:00:55.155 ***** 2025-09-02 00:41:45.517051 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-02 00:41:45.517061 | orchestrator | 2025-09-02 00:41:45.517072 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:45.517083 | orchestrator | Tuesday 02 September 2025 00:41:45 +0000 (0:00:00.350) 0:00:55.506 ***** 2025-09-02 00:41:45.517093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-02 00:41:45.517104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-02 00:41:45.517115 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-02 00:41:45.517125 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-02 00:41:45.517136 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-02 00:41:45.517146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-02 00:41:45.517157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-02 00:41:45.517167 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-02 00:41:45.517178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-02 00:41:45.517189 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-02 00:41:45.517200 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-02 00:41:45.517218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-02 00:41:54.738576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-02 00:41:54.738674 | orchestrator | 2025-09-02 00:41:54.738688 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:54.738699 | orchestrator | Tuesday 02 September 2025 00:41:45 +0000 (0:00:00.417) 0:00:55.923 ***** 2025-09-02 00:41:54.738708 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.738718 | orchestrator | 2025-09-02 00:41:54.738728 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:54.738737 | orchestrator | Tuesday 02 September 2025 00:41:45 +0000 (0:00:00.203) 0:00:56.127 ***** 2025-09-02 00:41:54.738746 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.738755 | orchestrator | 2025-09-02 00:41:54.738764 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:54.738773 | orchestrator | Tuesday 02 September 2025 00:41:45 +0000 (0:00:00.234) 0:00:56.361 ***** 2025-09-02 00:41:54.738782 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.738791 | orchestrator | 2025-09-02 00:41:54.738800 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:54.738831 | orchestrator | Tuesday 02 September 2025 00:41:46 +0000 (0:00:00.668) 0:00:57.029 ***** 2025-09-02 00:41:54.738840 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.738850 | orchestrator | 2025-09-02 00:41:54.738858 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:54.738867 | orchestrator | Tuesday 02 September 2025 00:41:46 +0000 (0:00:00.212) 0:00:57.242 ***** 2025-09-02 00:41:54.738925 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.738936 | orchestrator | 2025-09-02 00:41:54.738944 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:54.738953 | orchestrator | Tuesday 02 September 2025 00:41:47 +0000 (0:00:00.185) 0:00:57.428 ***** 2025-09-02 00:41:54.738962 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.738971 | orchestrator | 2025-09-02 00:41:54.738979 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-09-02 00:41:54.738988 | orchestrator | Tuesday 02 September 2025 00:41:47 +0000 (0:00:00.200) 0:00:57.628 ***** 2025-09-02 00:41:54.738997 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.739005 | orchestrator | 2025-09-02 00:41:54.739014 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:54.739023 | orchestrator | Tuesday 02 September 2025 00:41:47 +0000 (0:00:00.213) 0:00:57.842 ***** 2025-09-02 00:41:54.739031 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.739040 | orchestrator | 2025-09-02 00:41:54.739049 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:54.739057 | orchestrator | Tuesday 02 September 2025 00:41:47 +0000 (0:00:00.209) 0:00:58.051 ***** 2025-09-02 00:41:54.739066 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-02 00:41:54.739076 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-02 00:41:54.739099 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-02 00:41:54.739108 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-02 00:41:54.739117 | orchestrator | 2025-09-02 00:41:54.739128 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:54.739138 | orchestrator | Tuesday 02 September 2025 00:41:48 +0000 (0:00:00.658) 0:00:58.710 ***** 2025-09-02 00:41:54.739148 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.739158 | orchestrator | 2025-09-02 00:41:54.739168 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:54.739178 | orchestrator | Tuesday 02 September 2025 00:41:48 +0000 (0:00:00.215) 0:00:58.926 ***** 2025-09-02 00:41:54.739188 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.739198 | orchestrator | 2025-09-02 00:41:54.739209 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:54.739220 | orchestrator | Tuesday 02 September 2025 00:41:48 +0000 (0:00:00.194) 0:00:59.120 ***** 2025-09-02 00:41:54.739230 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.739240 | orchestrator | 2025-09-02 00:41:54.739250 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-02 00:41:54.739260 | orchestrator | Tuesday 02 September 2025 00:41:48 +0000 (0:00:00.206) 0:00:59.326 ***** 2025-09-02 00:41:54.739270 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.739280 | orchestrator | 2025-09-02 00:41:54.739289 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-02 00:41:54.739298 | orchestrator | Tuesday 02 September 2025 00:41:49 +0000 (0:00:00.201) 0:00:59.528 ***** 2025-09-02 00:41:54.739307 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.739315 | orchestrator | 2025-09-02 00:41:54.739324 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-02 00:41:54.739333 | orchestrator | Tuesday 02 September 2025 00:41:49 +0000 (0:00:00.374) 0:00:59.902 ***** 2025-09-02 00:41:54.739341 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ad19e49-f824-57b0-a164-7b3912efd317'}}) 2025-09-02 00:41:54.739350 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'14a05dcf-7776-5f2b-8543-65494bada47a'}}) 2025-09-02 00:41:54.739366 | orchestrator | 2025-09-02 00:41:54.739375 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-02 00:41:54.739383 | orchestrator | Tuesday 02 September 2025 00:41:49 +0000 (0:00:00.200) 0:01:00.103 ***** 2025-09-02 00:41:54.739393 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'}) 2025-09-02 00:41:54.739404 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'}) 2025-09-02 00:41:54.739412 | orchestrator | 2025-09-02 00:41:54.739421 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-02 00:41:54.739444 | orchestrator | Tuesday 02 September 2025 00:41:51 +0000 (0:00:01.939) 0:01:02.043 ***** 2025-09-02 00:41:54.739454 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:41:54.739464 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:41:54.739473 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.739482 | orchestrator | 2025-09-02 00:41:54.739491 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-02 00:41:54.739499 | orchestrator | Tuesday 02 September 2025 00:41:51 +0000 (0:00:00.153) 0:01:02.197 ***** 2025-09-02 00:41:54.739508 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'}) 2025-09-02 00:41:54.739517 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'}) 2025-09-02 00:41:54.739526 | orchestrator | 2025-09-02 00:41:54.739535 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-02 00:41:54.739544 | orchestrator | Tuesday 02 September 2025 00:41:53 +0000 (0:00:01.349) 0:01:03.546 ***** 2025-09-02 00:41:54.739552 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:41:54.739561 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:41:54.739570 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.739579 | orchestrator | 2025-09-02 00:41:54.739587 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-02 00:41:54.739596 | orchestrator | Tuesday 02 September 2025 00:41:53 +0000 (0:00:00.160) 0:01:03.707 ***** 2025-09-02 00:41:54.739604 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.739613 | orchestrator | 2025-09-02 00:41:54.739622 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-02 00:41:54.739630 | orchestrator | Tuesday 02 September 2025 00:41:53 +0000 (0:00:00.146) 0:01:03.853 ***** 2025-09-02 
00:41:54.739639 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:41:54.739653 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:41:54.739662 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.739670 | orchestrator | 2025-09-02 00:41:54.739679 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-02 00:41:54.739688 | orchestrator | Tuesday 02 September 2025 00:41:53 +0000 (0:00:00.157) 0:01:04.010 ***** 2025-09-02 00:41:54.739696 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.739710 | orchestrator | 2025-09-02 00:41:54.739719 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-02 00:41:54.739728 | orchestrator | Tuesday 02 September 2025 00:41:53 +0000 (0:00:00.166) 0:01:04.177 ***** 2025-09-02 00:41:54.739736 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:41:54.739745 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:41:54.739754 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.739763 | orchestrator | 2025-09-02 00:41:54.739771 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-02 00:41:54.739780 | orchestrator | Tuesday 02 September 2025 00:41:53 +0000 (0:00:00.169) 0:01:04.346 ***** 2025-09-02 00:41:54.739788 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.739797 | orchestrator | 2025-09-02 00:41:54.739805 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-02 00:41:54.739814 | orchestrator | Tuesday 02 September 2025 00:41:54 +0000 (0:00:00.144) 0:01:04.491 ***** 2025-09-02 00:41:54.739823 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:41:54.739831 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:41:54.739840 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:41:54.739853 | orchestrator | 2025-09-02 00:41:54.739869 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-02 00:41:54.739895 | orchestrator | Tuesday 02 September 2025 00:41:54 +0000 (0:00:00.159) 0:01:04.651 ***** 2025-09-02 00:41:54.739904 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:41:54.739913 | orchestrator | 2025-09-02 00:41:54.739922 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-02 00:41:54.739930 | orchestrator | Tuesday 02 September 2025 00:41:54 +0000 (0:00:00.350) 0:01:05.001 ***** 2025-09-02 00:41:54.739945 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:42:01.053872 | orchestrator | 
skipping: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:42:01.058292 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.058322 | orchestrator | 2025-09-02 00:42:01.058332 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-02 00:42:01.058356 | orchestrator | Tuesday 02 September 2025 00:41:54 +0000 (0:00:00.155) 0:01:05.157 ***** 2025-09-02 00:42:01.058373 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:42:01.058382 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:42:01.058389 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.058397 | orchestrator | 2025-09-02 00:42:01.058405 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-02 00:42:01.058412 | orchestrator | Tuesday 02 September 2025 00:41:54 +0000 (0:00:00.164) 0:01:05.322 ***** 2025-09-02 00:42:01.058419 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:42:01.058427 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:42:01.058434 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.058463 | orchestrator | 2025-09-02 00:42:01.058471 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-02 00:42:01.058478 | orchestrator | Tuesday 02 September 2025 00:41:55 +0000 (0:00:00.155) 0:01:05.478 ***** 2025-09-02 00:42:01.058494 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.058501 | orchestrator | 2025-09-02 00:42:01.058508 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-02 00:42:01.058516 | orchestrator | Tuesday 02 September 2025 00:41:55 +0000 (0:00:00.143) 0:01:05.622 ***** 2025-09-02 00:42:01.058523 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.058530 | orchestrator | 2025-09-02 00:42:01.058545 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-02 00:42:01.058553 | orchestrator | Tuesday 02 September 2025 00:41:55 +0000 (0:00:00.133) 0:01:05.756 ***** 2025-09-02 00:42:01.058560 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.058567 | orchestrator | 2025-09-02 00:42:01.058575 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-02 00:42:01.058583 | orchestrator | Tuesday 02 September 2025 00:41:55 +0000 (0:00:00.144) 0:01:05.901 ***** 2025-09-02 00:42:01.058590 | orchestrator | ok: [testbed-node-5] => { 2025-09-02 00:42:01.058598 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-02 00:42:01.058606 | orchestrator | } 2025-09-02 00:42:01.058614 | orchestrator | 2025-09-02 00:42:01.058621 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-02 00:42:01.058629 | orchestrator | Tuesday 02 September 2025 00:41:55 +0000 (0:00:00.140) 
0:01:06.041 ***** 2025-09-02 00:42:01.058636 | orchestrator | ok: [testbed-node-5] => { 2025-09-02 00:42:01.058643 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-02 00:42:01.058650 | orchestrator | } 2025-09-02 00:42:01.058658 | orchestrator | 2025-09-02 00:42:01.058665 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-02 00:42:01.058673 | orchestrator | Tuesday 02 September 2025 00:41:55 +0000 (0:00:00.135) 0:01:06.177 ***** 2025-09-02 00:42:01.058681 | orchestrator | ok: [testbed-node-5] => { 2025-09-02 00:42:01.058688 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-02 00:42:01.058695 | orchestrator | } 2025-09-02 00:42:01.058703 | orchestrator | 2025-09-02 00:42:01.058710 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-02 00:42:01.058717 | orchestrator | Tuesday 02 September 2025 00:41:55 +0000 (0:00:00.157) 0:01:06.334 ***** 2025-09-02 00:42:01.058725 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:42:01.058732 | orchestrator | 2025-09-02 00:42:01.058739 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-02 00:42:01.058746 | orchestrator | Tuesday 02 September 2025 00:41:56 +0000 (0:00:00.533) 0:01:06.868 ***** 2025-09-02 00:42:01.058753 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:42:01.058761 | orchestrator | 2025-09-02 00:42:01.058768 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-02 00:42:01.058775 | orchestrator | Tuesday 02 September 2025 00:41:56 +0000 (0:00:00.534) 0:01:07.402 ***** 2025-09-02 00:42:01.058782 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:42:01.058789 | orchestrator | 2025-09-02 00:42:01.058797 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-02 00:42:01.058804 | orchestrator | Tuesday 02 September 2025 00:41:57 +0000 (0:00:00.703) 0:01:08.105 ***** 2025-09-02 00:42:01.058811 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:42:01.058818 | orchestrator | 2025-09-02 00:42:01.058825 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-02 00:42:01.058832 | orchestrator | Tuesday 02 September 2025 00:41:57 +0000 (0:00:00.134) 0:01:08.239 ***** 2025-09-02 00:42:01.058840 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.058847 | orchestrator | 2025-09-02 00:42:01.058854 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-02 00:42:01.058861 | orchestrator | Tuesday 02 September 2025 00:41:57 +0000 (0:00:00.128) 0:01:08.368 ***** 2025-09-02 00:42:01.058874 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.058881 | orchestrator | 2025-09-02 00:42:01.058888 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-02 00:42:01.058895 | orchestrator | Tuesday 02 September 2025 00:41:58 +0000 (0:00:00.134) 0:01:08.503 ***** 2025-09-02 00:42:01.058918 | orchestrator | ok: [testbed-node-5] => { 2025-09-02 00:42:01.058938 | orchestrator |  "vgs_report": { 2025-09-02 00:42:01.058946 | orchestrator |  "vg": [] 2025-09-02 00:42:01.058974 | orchestrator |  } 2025-09-02 00:42:01.058993 | orchestrator | } 2025-09-02 00:42:01.059001 | orchestrator | 2025-09-02 00:42:01.059008 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 
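Note: the three "Gather ... VGs with total and available size in bytes" tasks feed the _db_vgs_cmd_output / _wal_vgs_cmd_output / _db_wal_vgs_cmd_output variables that are combined into the (here empty) vgs_report shown above. A minimal sketch of what such a task could look like, assuming the role shells out to LVM's JSON reporting; the exact command, output fields and variable names used by the role may differ:

- name: Gather DB VGs with total and available size in bytes
  # LVM can emit machine-readable JSON; sizes are requested in bytes
  ansible.builtin.command: >
    vgs --reportformat json --units b -o vg_name,vg_size,vg_free
  register: _db_vgs_cmd_output
  changed_when: false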
2025-09-02 00:42:01.059015 | orchestrator | Tuesday 02 September 2025 00:41:58 +0000 (0:00:00.153) 0:01:08.656 ***** 2025-09-02 00:42:01.059023 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.059030 | orchestrator | 2025-09-02 00:42:01.059037 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-02 00:42:01.059044 | orchestrator | Tuesday 02 September 2025 00:41:58 +0000 (0:00:00.168) 0:01:08.825 ***** 2025-09-02 00:42:01.059051 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.059059 | orchestrator | 2025-09-02 00:42:01.059066 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-02 00:42:01.059073 | orchestrator | Tuesday 02 September 2025 00:41:58 +0000 (0:00:00.143) 0:01:08.969 ***** 2025-09-02 00:42:01.059080 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.059088 | orchestrator | 2025-09-02 00:42:01.059095 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-02 00:42:01.059102 | orchestrator | Tuesday 02 September 2025 00:41:58 +0000 (0:00:00.148) 0:01:09.117 ***** 2025-09-02 00:42:01.059109 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.059117 | orchestrator | 2025-09-02 00:42:01.059124 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-02 00:42:01.059131 | orchestrator | Tuesday 02 September 2025 00:41:58 +0000 (0:00:00.147) 0:01:09.265 ***** 2025-09-02 00:42:01.059138 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.059146 | orchestrator | 2025-09-02 00:42:01.059153 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-02 00:42:01.059160 | orchestrator | Tuesday 02 September 2025 00:41:59 +0000 (0:00:00.163) 0:01:09.429 ***** 2025-09-02 00:42:01.059167 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.059175 | orchestrator | 2025-09-02 00:42:01.059182 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-02 00:42:01.059189 | orchestrator | Tuesday 02 September 2025 00:41:59 +0000 (0:00:00.146) 0:01:09.575 ***** 2025-09-02 00:42:01.059196 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.059204 | orchestrator | 2025-09-02 00:42:01.059211 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-02 00:42:01.059218 | orchestrator | Tuesday 02 September 2025 00:41:59 +0000 (0:00:00.125) 0:01:09.701 ***** 2025-09-02 00:42:01.059225 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.059232 | orchestrator | 2025-09-02 00:42:01.059240 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-02 00:42:01.059247 | orchestrator | Tuesday 02 September 2025 00:41:59 +0000 (0:00:00.145) 0:01:09.847 ***** 2025-09-02 00:42:01.059254 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.059261 | orchestrator | 2025-09-02 00:42:01.059269 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-02 00:42:01.059280 | orchestrator | Tuesday 02 September 2025 00:41:59 +0000 (0:00:00.367) 0:01:10.214 ***** 2025-09-02 00:42:01.059288 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.059295 | orchestrator | 2025-09-02 00:42:01.059302 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] 
********************* 2025-09-02 00:42:01.059309 | orchestrator | Tuesday 02 September 2025 00:41:59 +0000 (0:00:00.139) 0:01:10.354 ***** 2025-09-02 00:42:01.059317 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.059330 | orchestrator | 2025-09-02 00:42:01.059337 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-02 00:42:01.059345 | orchestrator | Tuesday 02 September 2025 00:42:00 +0000 (0:00:00.143) 0:01:10.497 ***** 2025-09-02 00:42:01.059352 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.059359 | orchestrator | 2025-09-02 00:42:01.059366 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-02 00:42:01.059374 | orchestrator | Tuesday 02 September 2025 00:42:00 +0000 (0:00:00.141) 0:01:10.638 ***** 2025-09-02 00:42:01.059381 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.059388 | orchestrator | 2025-09-02 00:42:01.059396 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-02 00:42:01.059403 | orchestrator | Tuesday 02 September 2025 00:42:00 +0000 (0:00:00.148) 0:01:10.787 ***** 2025-09-02 00:42:01.059410 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.059417 | orchestrator | 2025-09-02 00:42:01.059424 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-02 00:42:01.059432 | orchestrator | Tuesday 02 September 2025 00:42:00 +0000 (0:00:00.139) 0:01:10.926 ***** 2025-09-02 00:42:01.059439 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:42:01.059446 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:42:01.059454 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.059461 | orchestrator | 2025-09-02 00:42:01.059468 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-02 00:42:01.059475 | orchestrator | Tuesday 02 September 2025 00:42:00 +0000 (0:00:00.161) 0:01:11.088 ***** 2025-09-02 00:42:01.059483 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:42:01.059490 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:42:01.059497 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:01.059504 | orchestrator | 2025-09-02 00:42:01.059512 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-02 00:42:01.059519 | orchestrator | Tuesday 02 September 2025 00:42:00 +0000 (0:00:00.184) 0:01:11.272 ***** 2025-09-02 00:42:01.059531 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:42:04.100473 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:42:04.100575 | orchestrator | skipping: [testbed-node-5] 2025-09-02 
00:42:04.100590 | orchestrator | 2025-09-02 00:42:04.100602 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-02 00:42:04.100615 | orchestrator | Tuesday 02 September 2025 00:42:01 +0000 (0:00:00.200) 0:01:11.473 ***** 2025-09-02 00:42:04.100627 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:42:04.100638 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:42:04.100648 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:04.100659 | orchestrator | 2025-09-02 00:42:04.100670 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-02 00:42:04.100681 | orchestrator | Tuesday 02 September 2025 00:42:01 +0000 (0:00:00.154) 0:01:11.627 ***** 2025-09-02 00:42:04.100692 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:42:04.100727 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:42:04.100738 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:04.100749 | orchestrator | 2025-09-02 00:42:04.100760 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-02 00:42:04.100771 | orchestrator | Tuesday 02 September 2025 00:42:01 +0000 (0:00:00.169) 0:01:11.797 ***** 2025-09-02 00:42:04.100781 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:42:04.100792 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:42:04.100803 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:04.100813 | orchestrator | 2025-09-02 00:42:04.100839 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-02 00:42:04.100850 | orchestrator | Tuesday 02 September 2025 00:42:01 +0000 (0:00:00.140) 0:01:11.937 ***** 2025-09-02 00:42:04.100861 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:42:04.100872 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:42:04.100883 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:04.100894 | orchestrator | 2025-09-02 00:42:04.100905 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-02 00:42:04.100970 | orchestrator | Tuesday 02 September 2025 00:42:01 +0000 (0:00:00.355) 0:01:12.293 ***** 2025-09-02 00:42:04.100983 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:42:04.100994 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:42:04.101005 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:04.101018 | orchestrator | 2025-09-02 00:42:04.101031 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-02 00:42:04.101045 | orchestrator | Tuesday 02 September 2025 00:42:02 +0000 (0:00:00.163) 0:01:12.456 ***** 2025-09-02 00:42:04.101058 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:42:04.101073 | orchestrator | 2025-09-02 00:42:04.101085 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-02 00:42:04.101097 | orchestrator | Tuesday 02 September 2025 00:42:02 +0000 (0:00:00.520) 0:01:12.977 ***** 2025-09-02 00:42:04.101110 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:42:04.101123 | orchestrator | 2025-09-02 00:42:04.101137 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-02 00:42:04.101150 | orchestrator | Tuesday 02 September 2025 00:42:03 +0000 (0:00:00.524) 0:01:13.502 ***** 2025-09-02 00:42:04.101162 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:42:04.101175 | orchestrator | 2025-09-02 00:42:04.101188 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-02 00:42:04.101200 | orchestrator | Tuesday 02 September 2025 00:42:03 +0000 (0:00:00.152) 0:01:13.654 ***** 2025-09-02 00:42:04.101213 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'vg_name': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'}) 2025-09-02 00:42:04.101227 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'vg_name': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'}) 2025-09-02 00:42:04.101240 | orchestrator | 2025-09-02 00:42:04.101252 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-02 00:42:04.101273 | orchestrator | Tuesday 02 September 2025 00:42:03 +0000 (0:00:00.173) 0:01:13.828 ***** 2025-09-02 00:42:04.101303 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:42:04.101316 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:42:04.101329 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:04.101342 | orchestrator | 2025-09-02 00:42:04.101355 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-02 00:42:04.101368 | orchestrator | Tuesday 02 September 2025 00:42:03 +0000 (0:00:00.180) 0:01:14.008 ***** 2025-09-02 00:42:04.101379 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:42:04.101390 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:42:04.101402 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:04.101413 | orchestrator | 2025-09-02 00:42:04.101423 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes 
is missing] ************************ 2025-09-02 00:42:04.101434 | orchestrator | Tuesday 02 September 2025 00:42:03 +0000 (0:00:00.177) 0:01:14.185 ***** 2025-09-02 00:42:04.101445 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'})  2025-09-02 00:42:04.101456 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'})  2025-09-02 00:42:04.101467 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:04.101478 | orchestrator | 2025-09-02 00:42:04.101489 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-02 00:42:04.101500 | orchestrator | Tuesday 02 September 2025 00:42:03 +0000 (0:00:00.151) 0:01:14.336 ***** 2025-09-02 00:42:04.101511 | orchestrator | ok: [testbed-node-5] => { 2025-09-02 00:42:04.101522 | orchestrator |  "lvm_report": { 2025-09-02 00:42:04.101534 | orchestrator |  "lv": [ 2025-09-02 00:42:04.101545 | orchestrator |  { 2025-09-02 00:42:04.101556 | orchestrator |  "lv_name": "osd-block-14a05dcf-7776-5f2b-8543-65494bada47a", 2025-09-02 00:42:04.101574 | orchestrator |  "vg_name": "ceph-14a05dcf-7776-5f2b-8543-65494bada47a" 2025-09-02 00:42:04.101586 | orchestrator |  }, 2025-09-02 00:42:04.101597 | orchestrator |  { 2025-09-02 00:42:04.101608 | orchestrator |  "lv_name": "osd-block-7ad19e49-f824-57b0-a164-7b3912efd317", 2025-09-02 00:42:04.101619 | orchestrator |  "vg_name": "ceph-7ad19e49-f824-57b0-a164-7b3912efd317" 2025-09-02 00:42:04.101630 | orchestrator |  } 2025-09-02 00:42:04.101640 | orchestrator |  ], 2025-09-02 00:42:04.101651 | orchestrator |  "pv": [ 2025-09-02 00:42:04.101662 | orchestrator |  { 2025-09-02 00:42:04.101673 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-02 00:42:04.101684 | orchestrator |  "vg_name": "ceph-7ad19e49-f824-57b0-a164-7b3912efd317" 2025-09-02 00:42:04.101695 | orchestrator |  }, 2025-09-02 00:42:04.101705 | orchestrator |  { 2025-09-02 00:42:04.101716 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-02 00:42:04.101727 | orchestrator |  "vg_name": "ceph-14a05dcf-7776-5f2b-8543-65494bada47a" 2025-09-02 00:42:04.101738 | orchestrator |  } 2025-09-02 00:42:04.101749 | orchestrator |  ] 2025-09-02 00:42:04.101760 | orchestrator |  } 2025-09-02 00:42:04.101771 | orchestrator | } 2025-09-02 00:42:04.101782 | orchestrator | 2025-09-02 00:42:04.101793 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:42:04.101811 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-02 00:42:04.101822 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-02 00:42:04.101833 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-02 00:42:04.101844 | orchestrator | 2025-09-02 00:42:04.101855 | orchestrator | 2025-09-02 00:42:04.101866 | orchestrator | 2025-09-02 00:42:04.101877 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:42:04.101888 | orchestrator | Tuesday 02 September 2025 00:42:04 +0000 (0:00:00.150) 0:01:14.487 ***** 2025-09-02 00:42:04.101899 | orchestrator | =============================================================================== 2025-09-02 00:42:04.101910 | 
orchestrator | Create block VGs -------------------------------------------------------- 5.90s 2025-09-02 00:42:04.101936 | orchestrator | Create block LVs -------------------------------------------------------- 4.19s 2025-09-02 00:42:04.101947 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.92s 2025-09-02 00:42:04.101957 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.81s 2025-09-02 00:42:04.101968 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.58s 2025-09-02 00:42:04.101979 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.57s 2025-09-02 00:42:04.101989 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.56s 2025-09-02 00:42:04.102000 | orchestrator | Add known partitions to the list of available block devices ------------- 1.54s 2025-09-02 00:42:04.102077 | orchestrator | Add known links to the list of available block devices ------------------ 1.26s 2025-09-02 00:42:04.468299 | orchestrator | Add known partitions to the list of available block devices ------------- 1.13s 2025-09-02 00:42:04.468408 | orchestrator | Print LVM report data --------------------------------------------------- 0.95s 2025-09-02 00:42:04.468422 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s 2025-09-02 00:42:04.468434 | orchestrator | Add known links to the list of available block devices ------------------ 0.84s 2025-09-02 00:42:04.468445 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s 2025-09-02 00:42:04.468456 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2025-09-02 00:42:04.468467 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.74s 2025-09-02 00:42:04.468478 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.72s 2025-09-02 00:42:04.468488 | orchestrator | Print size needed for LVs on ceph_wal_devices --------------------------- 0.70s 2025-09-02 00:42:04.468498 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.69s 2025-09-02 00:42:04.468509 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2025-09-02 00:42:16.777070 | orchestrator | 2025-09-02 00:42:16 | INFO  | Task 1fd101ca-154b-4f71-9f19-3e963a4bcef5 (facts) was prepared for execution. 2025-09-02 00:42:16.777171 | orchestrator | 2025-09-02 00:42:16 | INFO  | It takes a moment until task 1fd101ca-154b-4f71-9f19-3e963a4bcef5 (facts) has been started and output is visible here. 
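Note: the (item={'data': ..., 'data_vg': ...}) pairs looped over in the Ceph LVM configuration play above come from the lvm_volumes variable of the Ceph configuration. A minimal sketch of such a definition for testbed-node-5, using the VG/LV names from the LVM report; the commented db/wal keys are illustrative only and are not set in this run, which is why all DB/WAL tasks were skipped:

lvm_volumes:
  - data: osd-block-7ad19e49-f824-57b0-a164-7b3912efd317
    data_vg: ceph-7ad19e49-f824-57b0-a164-7b3912efd317
  - data: osd-block-14a05dcf-7776-5f2b-8543-65494bada47a
    data_vg: ceph-14a05dcf-7776-5f2b-8543-65494bada47a
    # db: <db-lv-name> / db_vg: <db-vg-name>     -- would place RocksDB on a ceph_db_device
    # wal: <wal-lv-name> / wal_vg: <wal-vg-name> -- would place the WAL on a ceph_wal_device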
2025-09-02 00:42:28.502391 | orchestrator | 2025-09-02 00:42:28.502495 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-02 00:42:28.502511 | orchestrator | 2025-09-02 00:42:28.502521 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-02 00:42:28.502532 | orchestrator | Tuesday 02 September 2025 00:42:20 +0000 (0:00:00.275) 0:00:00.275 ***** 2025-09-02 00:42:28.502542 | orchestrator | ok: [testbed-manager] 2025-09-02 00:42:28.502553 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:42:28.502591 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:42:28.502601 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:42:28.502611 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:42:28.502620 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:42:28.502629 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:42:28.502639 | orchestrator | 2025-09-02 00:42:28.502649 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-02 00:42:28.502659 | orchestrator | Tuesday 02 September 2025 00:42:21 +0000 (0:00:01.078) 0:00:01.353 ***** 2025-09-02 00:42:28.502669 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:42:28.502679 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:42:28.502689 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:42:28.502699 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:42:28.502709 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:42:28.502719 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:42:28.502728 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:42:28.502738 | orchestrator | 2025-09-02 00:42:28.502747 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-02 00:42:28.502757 | orchestrator | 2025-09-02 00:42:28.502767 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-02 00:42:28.502776 | orchestrator | Tuesday 02 September 2025 00:42:23 +0000 (0:00:01.213) 0:00:02.567 ***** 2025-09-02 00:42:28.502786 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:42:28.502795 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:42:28.502805 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:42:28.502814 | orchestrator | ok: [testbed-manager] 2025-09-02 00:42:28.502824 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:42:28.502833 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:42:28.502843 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:42:28.502852 | orchestrator | 2025-09-02 00:42:28.502862 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-02 00:42:28.502872 | orchestrator | 2025-09-02 00:42:28.502881 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-02 00:42:28.502891 | orchestrator | Tuesday 02 September 2025 00:42:27 +0000 (0:00:04.430) 0:00:06.998 ***** 2025-09-02 00:42:28.502901 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:42:28.502910 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:42:28.502920 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:42:28.502929 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:42:28.502941 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:42:28.502952 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:42:28.502962 | orchestrator | skipping: 
[testbed-node-5] 2025-09-02 00:42:28.502973 | orchestrator | 2025-09-02 00:42:28.502985 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:42:28.502996 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:42:28.503035 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:42:28.503047 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:42:28.503059 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:42:28.503070 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:42:28.503082 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:42:28.503093 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:42:28.503112 | orchestrator | 2025-09-02 00:42:28.503123 | orchestrator | 2025-09-02 00:42:28.503135 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:42:28.503146 | orchestrator | Tuesday 02 September 2025 00:42:28 +0000 (0:00:00.539) 0:00:07.537 ***** 2025-09-02 00:42:28.503158 | orchestrator | =============================================================================== 2025-09-02 00:42:28.503167 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.43s 2025-09-02 00:42:28.503177 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.21s 2025-09-02 00:42:28.503186 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.08s 2025-09-02 00:42:28.503196 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2025-09-02 00:42:40.863220 | orchestrator | 2025-09-02 00:42:40 | INFO  | Task 42ab09c2-faaa-43ae-909c-2d62132ba150 (frr) was prepared for execution. 2025-09-02 00:42:40.863326 | orchestrator | 2025-09-02 00:42:40 | INFO  | It takes a moment until task 42ab09c2-faaa-43ae-909c-2d62132ba150 (frr) has been started and output is visible here. 
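Note: the osism.commons.facts play above only ensures the custom facts directory exists and, optionally, copies fact files into it before the actual fact gathering runs. A minimal sketch of that first task, assuming the standard Ansible local-facts path; the role's actual path, owner and mode may differ:

- name: Create custom facts directory
  ansible.builtin.file:
    path: /etc/ansible/facts.d   # default directory read during fact gathering
    state: directory
    mode: "0755"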
2025-09-02 00:43:06.857202 | orchestrator | 2025-09-02 00:43:06.857295 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-09-02 00:43:06.857311 | orchestrator | 2025-09-02 00:43:06.857322 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-09-02 00:43:06.857333 | orchestrator | Tuesday 02 September 2025 00:42:44 +0000 (0:00:00.235) 0:00:00.235 ***** 2025-09-02 00:43:06.857358 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-09-02 00:43:06.857370 | orchestrator | 2025-09-02 00:43:06.857380 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-09-02 00:43:06.857390 | orchestrator | Tuesday 02 September 2025 00:42:45 +0000 (0:00:00.220) 0:00:00.456 ***** 2025-09-02 00:43:06.857400 | orchestrator | changed: [testbed-manager] 2025-09-02 00:43:06.857410 | orchestrator | 2025-09-02 00:43:06.857420 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-09-02 00:43:06.857430 | orchestrator | Tuesday 02 September 2025 00:42:46 +0000 (0:00:01.179) 0:00:01.636 ***** 2025-09-02 00:43:06.857440 | orchestrator | changed: [testbed-manager] 2025-09-02 00:43:06.857450 | orchestrator | 2025-09-02 00:43:06.857464 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-09-02 00:43:06.857474 | orchestrator | Tuesday 02 September 2025 00:42:56 +0000 (0:00:09.819) 0:00:11.455 ***** 2025-09-02 00:43:06.857484 | orchestrator | ok: [testbed-manager] 2025-09-02 00:43:06.857494 | orchestrator | 2025-09-02 00:43:06.857504 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-09-02 00:43:06.857514 | orchestrator | Tuesday 02 September 2025 00:42:57 +0000 (0:00:01.273) 0:00:12.728 ***** 2025-09-02 00:43:06.857524 | orchestrator | changed: [testbed-manager] 2025-09-02 00:43:06.857534 | orchestrator | 2025-09-02 00:43:06.857544 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-09-02 00:43:06.857554 | orchestrator | Tuesday 02 September 2025 00:42:58 +0000 (0:00:00.975) 0:00:13.704 ***** 2025-09-02 00:43:06.857564 | orchestrator | ok: [testbed-manager] 2025-09-02 00:43:06.857573 | orchestrator | 2025-09-02 00:43:06.857583 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-09-02 00:43:06.857593 | orchestrator | Tuesday 02 September 2025 00:42:59 +0000 (0:00:01.167) 0:00:14.872 ***** 2025-09-02 00:43:06.857603 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-02 00:43:06.857613 | orchestrator | 2025-09-02 00:43:06.857623 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] *** 2025-09-02 00:43:06.857633 | orchestrator | Tuesday 02 September 2025 00:43:00 +0000 (0:00:00.821) 0:00:15.693 ***** 2025-09-02 00:43:06.857642 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:43:06.857652 | orchestrator | 2025-09-02 00:43:06.857663 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] ********* 2025-09-02 00:43:06.857691 | orchestrator | Tuesday 02 September 2025 00:43:00 +0000 (0:00:00.164) 0:00:15.858 ***** 2025-09-02 00:43:06.857701 | orchestrator | changed: [testbed-manager] 2025-09-02 00:43:06.857711 | orchestrator 
| 2025-09-02 00:43:06.857721 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-09-02 00:43:06.857731 | orchestrator | Tuesday 02 September 2025 00:43:01 +0000 (0:00:00.970) 0:00:16.829 ***** 2025-09-02 00:43:06.857742 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-09-02 00:43:06.857753 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-09-02 00:43:06.857765 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-09-02 00:43:06.857777 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-09-02 00:43:06.857789 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-09-02 00:43:06.857801 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-09-02 00:43:06.857812 | orchestrator | 2025-09-02 00:43:06.857823 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-09-02 00:43:06.857835 | orchestrator | Tuesday 02 September 2025 00:43:03 +0000 (0:00:02.240) 0:00:19.070 ***** 2025-09-02 00:43:06.857846 | orchestrator | ok: [testbed-manager] 2025-09-02 00:43:06.857857 | orchestrator | 2025-09-02 00:43:06.857869 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-09-02 00:43:06.857880 | orchestrator | Tuesday 02 September 2025 00:43:05 +0000 (0:00:01.404) 0:00:20.474 ***** 2025-09-02 00:43:06.857891 | orchestrator | changed: [testbed-manager] 2025-09-02 00:43:06.857902 | orchestrator | 2025-09-02 00:43:06.857914 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:43:06.857925 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-02 00:43:06.857937 | orchestrator | 2025-09-02 00:43:06.857948 | orchestrator | 2025-09-02 00:43:06.857960 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:43:06.857971 | orchestrator | Tuesday 02 September 2025 00:43:06 +0000 (0:00:01.436) 0:00:21.911 ***** 2025-09-02 00:43:06.857983 | orchestrator | =============================================================================== 2025-09-02 00:43:06.857994 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.82s 2025-09-02 00:43:06.858005 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.24s 2025-09-02 00:43:06.858062 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.44s 2025-09-02 00:43:06.858076 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.40s 2025-09-02 00:43:06.858103 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.27s 2025-09-02 00:43:06.858113 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.18s 2025-09-02 00:43:06.858123 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.17s 2025-09-02 00:43:06.858162 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.98s 2025-09-02 
00:43:06.858174 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.97s 2025-09-02 00:43:06.858184 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.82s 2025-09-02 00:43:06.858193 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2025-09-02 00:43:06.858203 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.16s 2025-09-02 00:43:07.126101 | orchestrator | 2025-09-02 00:43:07.127422 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue Sep 2 00:43:07 UTC 2025 2025-09-02 00:43:07.127458 | orchestrator | 2025-09-02 00:43:09.133562 | orchestrator | 2025-09-02 00:43:09 | INFO  | Collection nutshell is prepared for execution 2025-09-02 00:43:09.133650 | orchestrator | 2025-09-02 00:43:09 | INFO  | D [0] - dotfiles 2025-09-02 00:43:19.216235 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [0] - homer 2025-09-02 00:43:19.216324 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [0] - netdata 2025-09-02 00:43:19.216341 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [0] - openstackclient 2025-09-02 00:43:19.216353 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [0] - phpmyadmin 2025-09-02 00:43:19.216364 | orchestrator | 2025-09-02 00:43:19 | INFO  | A [0] - common 2025-09-02 00:43:19.219916 | orchestrator | 2025-09-02 00:43:19 | INFO  | A [1] -- loadbalancer 2025-09-02 00:43:19.219946 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [2] --- opensearch 2025-09-02 00:43:19.220208 | orchestrator | 2025-09-02 00:43:19 | INFO  | A [2] --- mariadb-ng 2025-09-02 00:43:19.220556 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [3] ---- horizon 2025-09-02 00:43:19.220798 | orchestrator | 2025-09-02 00:43:19 | INFO  | A [3] ---- keystone 2025-09-02 00:43:19.221245 | orchestrator | 2025-09-02 00:43:19 | INFO  | A [4] ----- neutron 2025-09-02 00:43:19.221641 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [5] ------ wait-for-nova 2025-09-02 00:43:19.221859 | orchestrator | 2025-09-02 00:43:19 | INFO  | A [5] ------ octavia 2025-09-02 00:43:19.223555 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [4] ----- barbican 2025-09-02 00:43:19.224408 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [4] ----- designate 2025-09-02 00:43:19.224431 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [4] ----- ironic 2025-09-02 00:43:19.224445 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [4] ----- placement 2025-09-02 00:43:19.224719 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [4] ----- magnum 2025-09-02 00:43:19.225731 | orchestrator | 2025-09-02 00:43:19 | INFO  | A [1] -- openvswitch 2025-09-02 00:43:19.225754 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [2] --- ovn 2025-09-02 00:43:19.226313 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [1] -- memcached 2025-09-02 00:43:19.226455 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [1] -- redis 2025-09-02 00:43:19.226853 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [1] -- rabbitmq-ng 2025-09-02 00:43:19.227079 | orchestrator | 2025-09-02 00:43:19 | INFO  | A [0] - kubernetes 2025-09-02 00:43:19.229622 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [1] -- kubeconfig 2025-09-02 00:43:19.229902 | orchestrator | 2025-09-02 00:43:19 | INFO  | A [1] -- copy-kubeconfig 2025-09-02 00:43:19.230003 | orchestrator | 2025-09-02 00:43:19 | INFO  | A [0] - ceph 2025-09-02 00:43:19.232421 | orchestrator | 2025-09-02 00:43:19 | INFO  | A [1] -- ceph-pools 2025-09-02 
00:43:19.232749 | orchestrator | 2025-09-02 00:43:19 | INFO  | A [2] --- copy-ceph-keys 2025-09-02 00:43:19.232770 | orchestrator | 2025-09-02 00:43:19 | INFO  | A [3] ---- cephclient 2025-09-02 00:43:19.232781 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-09-02 00:43:19.232908 | orchestrator | 2025-09-02 00:43:19 | INFO  | A [4] ----- wait-for-keystone 2025-09-02 00:43:19.233723 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [5] ------ kolla-ceph-rgw 2025-09-02 00:43:19.233743 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [5] ------ glance 2025-09-02 00:43:19.233754 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [5] ------ cinder 2025-09-02 00:43:19.233764 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [5] ------ nova 2025-09-02 00:43:19.234196 | orchestrator | 2025-09-02 00:43:19 | INFO  | A [4] ----- prometheus 2025-09-02 00:43:19.234221 | orchestrator | 2025-09-02 00:43:19 | INFO  | D [5] ------ grafana 2025-09-02 00:43:19.438397 | orchestrator | 2025-09-02 00:43:19 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-09-02 00:43:19.438501 | orchestrator | 2025-09-02 00:43:19 | INFO  | Tasks are running in the background 2025-09-02 00:43:22.500215 | orchestrator | 2025-09-02 00:43:22 | INFO  | No task IDs specified, wait for all currently running tasks 2025-09-02 00:43:24.644944 | orchestrator | 2025-09-02 00:43:24 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:43:24.650499 | orchestrator | 2025-09-02 00:43:24 | INFO  | Task b8fdfc25-c1b4-487f-b023-6ec1bc90136f is in state STARTED 2025-09-02 00:43:24.650914 | orchestrator | 2025-09-02 00:43:24 | INFO  | Task b46be20a-e463-4436-bdc5-0da25a81d812 is in state STARTED 2025-09-02 00:43:24.651565 | orchestrator | 2025-09-02 00:43:24 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:43:24.652155 | orchestrator | 2025-09-02 00:43:24 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:43:24.653554 | orchestrator | 2025-09-02 00:43:24 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:43:24.655721 | orchestrator | 2025-09-02 00:43:24 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:43:24.655769 | orchestrator | 2025-09-02 00:43:24 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:43:27.703048 | orchestrator | 2025-09-02 00:43:27 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:43:27.703446 | orchestrator | 2025-09-02 00:43:27 | INFO  | Task b8fdfc25-c1b4-487f-b023-6ec1bc90136f is in state STARTED 2025-09-02 00:43:27.705965 | orchestrator | 2025-09-02 00:43:27 | INFO  | Task b46be20a-e463-4436-bdc5-0da25a81d812 is in state STARTED 2025-09-02 00:43:27.706657 | orchestrator | 2025-09-02 00:43:27 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:43:27.707384 | orchestrator | 2025-09-02 00:43:27 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:43:27.709903 | orchestrator | 2025-09-02 00:43:27 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:43:27.710685 | orchestrator | 2025-09-02 00:43:27 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:43:27.710785 | orchestrator | 2025-09-02 00:43:27 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:43:30.769674 | orchestrator | 2025-09-02 00:43:30 | INFO  
| Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:43:30.769894 | orchestrator | 2025-09-02 00:43:30 | INFO  | Task b8fdfc25-c1b4-487f-b023-6ec1bc90136f is in state STARTED 2025-09-02 00:43:30.770402 | orchestrator | 2025-09-02 00:43:30 | INFO  | Task b46be20a-e463-4436-bdc5-0da25a81d812 is in state STARTED 2025-09-02 00:43:30.774102 | orchestrator | 2025-09-02 00:43:30 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:43:30.774536 | orchestrator | 2025-09-02 00:43:30 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:43:30.775172 | orchestrator | 2025-09-02 00:43:30 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:43:30.775764 | orchestrator | 2025-09-02 00:43:30 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:43:30.775791 | orchestrator | 2025-09-02 00:43:30 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:43:33.930354 | orchestrator | 2025-09-02 00:43:33 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:43:33.930432 | orchestrator | 2025-09-02 00:43:33 | INFO  | Task b8fdfc25-c1b4-487f-b023-6ec1bc90136f is in state STARTED 2025-09-02 00:43:33.930444 | orchestrator | 2025-09-02 00:43:33 | INFO  | Task b46be20a-e463-4436-bdc5-0da25a81d812 is in state STARTED 2025-09-02 00:43:33.930455 | orchestrator | 2025-09-02 00:43:33 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:43:33.930465 | orchestrator | 2025-09-02 00:43:33 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:43:33.930475 | orchestrator | 2025-09-02 00:43:33 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:43:33.930485 | orchestrator | 2025-09-02 00:43:33 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:43:33.930495 | orchestrator | 2025-09-02 00:43:33 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:43:36.977722 | orchestrator | 2025-09-02 00:43:36 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:43:36.977814 | orchestrator | 2025-09-02 00:43:36 | INFO  | Task b8fdfc25-c1b4-487f-b023-6ec1bc90136f is in state STARTED 2025-09-02 00:43:36.977830 | orchestrator | 2025-09-02 00:43:36 | INFO  | Task b46be20a-e463-4436-bdc5-0da25a81d812 is in state STARTED 2025-09-02 00:43:36.977841 | orchestrator | 2025-09-02 00:43:36 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:43:36.977853 | orchestrator | 2025-09-02 00:43:36 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:43:36.977864 | orchestrator | 2025-09-02 00:43:36 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:43:36.977875 | orchestrator | 2025-09-02 00:43:36 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:43:36.977885 | orchestrator | 2025-09-02 00:43:36 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:43:39.993975 | orchestrator | 2025-09-02 00:43:39 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:43:39.994127 | orchestrator | 2025-09-02 00:43:39 | INFO  | Task b8fdfc25-c1b4-487f-b023-6ec1bc90136f is in state STARTED 2025-09-02 00:43:39.994146 | orchestrator | 2025-09-02 00:43:39 | INFO  | Task b46be20a-e463-4436-bdc5-0da25a81d812 is in state STARTED 2025-09-02 00:43:39.994158 
| orchestrator | 2025-09-02 00:43:39 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:43:39.994170 | orchestrator | 2025-09-02 00:43:39 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:43:39.994181 | orchestrator | 2025-09-02 00:43:39 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:43:39.994191 | orchestrator | 2025-09-02 00:43:39 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:43:39.994202 | orchestrator | 2025-09-02 00:43:39 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:43:43.137366 | orchestrator | 2025-09-02 00:43:43 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:43:43.137451 | orchestrator | 2025-09-02 00:43:43 | INFO  | Task b8fdfc25-c1b4-487f-b023-6ec1bc90136f is in state STARTED 2025-09-02 00:43:43.137465 | orchestrator | 2025-09-02 00:43:43 | INFO  | Task b46be20a-e463-4436-bdc5-0da25a81d812 is in state STARTED 2025-09-02 00:43:43.137476 | orchestrator | 2025-09-02 00:43:43 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:43:43.137508 | orchestrator | 2025-09-02 00:43:43 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:43:43.137519 | orchestrator | 2025-09-02 00:43:43 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:43:43.137529 | orchestrator | 2025-09-02 00:43:43 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:43:43.137539 | orchestrator | 2025-09-02 00:43:43 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:43:46.405138 | orchestrator | 2025-09-02 00:43:46 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:43:46.405217 | orchestrator | 2025-09-02 00:43:46 | INFO  | Task b8fdfc25-c1b4-487f-b023-6ec1bc90136f is in state STARTED 2025-09-02 00:43:46.405230 | orchestrator | 2025-09-02 00:43:46 | INFO  | Task b46be20a-e463-4436-bdc5-0da25a81d812 is in state STARTED 2025-09-02 00:43:46.405240 | orchestrator | 2025-09-02 00:43:46 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:43:46.405250 | orchestrator | 2025-09-02 00:43:46 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:43:46.405259 | orchestrator | 2025-09-02 00:43:46 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:43:46.405299 | orchestrator | 2025-09-02 00:43:46 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:43:46.405309 | orchestrator | 2025-09-02 00:43:46 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:43:49.420197 | orchestrator | 2025-09-02 00:43:49 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:43:49.421256 | orchestrator | 2025-09-02 00:43:49 | INFO  | Task b8fdfc25-c1b4-487f-b023-6ec1bc90136f is in state STARTED 2025-09-02 00:43:49.424086 | orchestrator | 2025-09-02 00:43:49 | INFO  | Task b46be20a-e463-4436-bdc5-0da25a81d812 is in state STARTED 2025-09-02 00:43:49.424118 | orchestrator | 2025-09-02 00:43:49 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:43:49.424466 | orchestrator | 2025-09-02 00:43:49 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:43:49.425361 | orchestrator | 2025-09-02 00:43:49 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c 
is in state STARTED 2025-09-02 00:43:49.426332 | orchestrator | 2025-09-02 00:43:49 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:43:49.426367 | orchestrator | 2025-09-02 00:43:49 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:43:52.526652 | orchestrator | 2025-09-02 00:43:52 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:43:52.527019 | orchestrator | 2025-09-02 00:43:52 | INFO  | Task b8fdfc25-c1b4-487f-b023-6ec1bc90136f is in state STARTED 2025-09-02 00:43:52.529420 | orchestrator | 2025-09-02 00:43:52.529451 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-09-02 00:43:52.529464 | orchestrator | 2025-09-02 00:43:52.529475 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-09-02 00:43:52.529487 | orchestrator | Tuesday 02 September 2025 00:43:32 +0000 (0:00:00.734) 0:00:00.734 ***** 2025-09-02 00:43:52.529498 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:43:52.529510 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:43:52.529521 | orchestrator | changed: [testbed-manager] 2025-09-02 00:43:52.529533 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:43:52.529544 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:43:52.529555 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:43:52.529586 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:43:52.529598 | orchestrator | 2025-09-02 00:43:52.529609 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-09-02 00:43:52.529620 | orchestrator | Tuesday 02 September 2025 00:43:37 +0000 (0:00:05.309) 0:00:06.044 ***** 2025-09-02 00:43:52.529631 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-02 00:43:52.529642 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-02 00:43:52.529653 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-02 00:43:52.529664 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-02 00:43:52.529674 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-02 00:43:52.529685 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-02 00:43:52.529696 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-02 00:43:52.529706 | orchestrator | 2025-09-02 00:43:52.529717 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-09-02 00:43:52.529728 | orchestrator | Tuesday 02 September 2025 00:43:40 +0000 (0:00:02.163) 0:00:08.208 ***** 2025-09-02 00:43:52.529771 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-02 00:43:39.277606', 'end': '2025-09-02 00:43:39.285718', 'delta': '0:00:00.008112', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-02 00:43:52.529788 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-02 00:43:38.752019', 'end': '2025-09-02 00:43:38.761458', 'delta': '0:00:00.009439', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-02 00:43:52.530103 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-02 00:43:38.932160', 'end': '2025-09-02 00:43:38.940850', 'delta': '0:00:00.008690', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-02 00:43:52.530163 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-02 00:43:38.639281', 'end': '2025-09-02 00:43:38.644777', 'delta': '0:00:00.005496', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 
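Note: the verbose item dicts above are the registered results of a per-dotfile pre-check (ls -F ~/.tmux.conf) that the geerlingguy.dotfiles role evaluates before removing a regular file that is about to be replaced by a link. A simplified sketch of such a check, based on the cmd, rc and failed_when fields visible in the output; task and variable names are illustrative:

- name: Check whether an existing dotfile would be replaced
  ansible.builtin.command: ls -F ~/.tmux.conf
  register: existing_dotfile_info
  failed_when: false    # rc=2 (file missing) is expected and not an error here
  changed_when: false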
2025-09-02 00:43:52.530209 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-02 00:43:39.531325', 'end': '2025-09-02 00:43:39.540681', 'delta': '0:00:00.009356', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-02 00:43:52.530261 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-02 00:43:39.629484', 'end': '2025-09-02 00:43:39.638675', 'delta': '0:00:00.009191', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-02 00:43:52.530276 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-02 00:43:39.796422', 'end': '2025-09-02 00:43:39.805865', 'delta': '0:00:00.009443', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-02 00:43:52.530310 | orchestrator | 2025-09-02 00:43:52.530324 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2025-09-02 00:43:52.530336 | orchestrator | Tuesday 02 September 2025 00:43:42 +0000 (0:00:02.279) 0:00:10.487 ***** 2025-09-02 00:43:52.530349 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-02 00:43:52.530362 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-02 00:43:52.530375 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-02 00:43:52.530386 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-02 00:43:52.530397 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-02 00:43:52.530408 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-02 00:43:52.530419 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-02 00:43:52.530430 | orchestrator | 2025-09-02 00:43:52.530440 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-09-02 00:43:52.530451 | orchestrator | Tuesday 02 September 2025 00:43:43 +0000 (0:00:01.477) 0:00:11.965 ***** 2025-09-02 00:43:52.530469 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-09-02 00:43:52.530481 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-09-02 00:43:52.530492 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-09-02 00:43:52.530503 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-09-02 00:43:52.530514 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-09-02 00:43:52.530524 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-09-02 00:43:52.530535 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-09-02 00:43:52.530546 | orchestrator | 2025-09-02 00:43:52.530557 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:43:52.530577 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:43:52.530590 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:43:52.530601 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:43:52.530612 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:43:52.530624 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:43:52.530635 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:43:52.530646 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:43:52.530657 | orchestrator | 2025-09-02 00:43:52.530668 | orchestrator | 2025-09-02 00:43:52.530680 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:43:52.530691 | orchestrator | Tuesday 02 September 2025 00:43:49 +0000 (0:00:05.331) 0:00:17.296 ***** 2025-09-02 00:43:52.530702 | orchestrator | =============================================================================== 2025-09-02 00:43:52.530713 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 5.33s 2025-09-02 00:43:52.530724 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. 
---- 5.31s 2025-09-02 00:43:52.530739 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.28s 2025-09-02 00:43:52.530751 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.16s 2025-09-02 00:43:52.530762 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.48s 2025-09-02 00:43:52.530773 | orchestrator | 2025-09-02 00:43:52 | INFO  | Task b46be20a-e463-4436-bdc5-0da25a81d812 is in state SUCCESS 2025-09-02 00:43:52.532505 | orchestrator | 2025-09-02 00:43:52 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:43:52.533530 | orchestrator | 2025-09-02 00:43:52 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:43:52.534525 | orchestrator | 2025-09-02 00:43:52 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:43:52.536522 | orchestrator | 2025-09-02 00:43:52 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:43:52.537551 | orchestrator | 2025-09-02 00:43:52 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:43:52.537573 | orchestrator | 2025-09-02 00:43:52 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:43:55.620711 | orchestrator | 2025-09-02 00:43:55 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:43:55.621453 | orchestrator | 2025-09-02 00:43:55 | INFO  | Task b8fdfc25-c1b4-487f-b023-6ec1bc90136f is in state STARTED 2025-09-02 00:43:55.623832 | orchestrator | 2025-09-02 00:43:55 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:43:55.626880 | orchestrator | 2025-09-02 00:43:55 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:43:55.629375 | orchestrator | 2025-09-02 00:43:55 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:43:55.632658 | orchestrator | 2025-09-02 00:43:55 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:43:55.633505 | orchestrator | 2025-09-02 00:43:55 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:43:55.633529 | orchestrator | 2025-09-02 00:43:55 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:43:58.742073 | orchestrator | 2025-09-02 00:43:58 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:43:58.742174 | orchestrator | 2025-09-02 00:43:58 | INFO  | Task b8fdfc25-c1b4-487f-b023-6ec1bc90136f is in state STARTED 2025-09-02 00:43:58.744508 | orchestrator | 2025-09-02 00:43:58 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:43:58.754474 | orchestrator | 2025-09-02 00:43:58 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:43:58.754499 | orchestrator | 2025-09-02 00:43:58 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:43:58.754511 | orchestrator | 2025-09-02 00:43:58 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:43:58.754523 | orchestrator | 2025-09-02 00:43:58 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:43:58.754535 | orchestrator | 2025-09-02 00:43:58 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:01.852978 | orchestrator | 2025-09-02 00:44:01 | INFO  | Task 
ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:44:01.853060 | orchestrator | 2025-09-02 00:44:01 | INFO  | Task b8fdfc25-c1b4-487f-b023-6ec1bc90136f is in state STARTED 2025-09-02 00:44:01.853073 | orchestrator | 2025-09-02 00:44:01 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:44:01.853084 | orchestrator | 2025-09-02 00:44:01 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:44:01.853095 | orchestrator | 2025-09-02 00:44:01 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:01.853105 | orchestrator | 2025-09-02 00:44:01 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:01.853116 | orchestrator | 2025-09-02 00:44:01 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:44:01.853126 | orchestrator | 2025-09-02 00:44:01 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:04.908982 | orchestrator | 2025-09-02 00:44:04 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:44:04.909068 | orchestrator | 2025-09-02 00:44:04 | INFO  | Task b8fdfc25-c1b4-487f-b023-6ec1bc90136f is in state STARTED 2025-09-02 00:44:04.909099 | orchestrator | 2025-09-02 00:44:04 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:44:04.909123 | orchestrator | 2025-09-02 00:44:04 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:44:04.909615 | orchestrator | 2025-09-02 00:44:04 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:04.910213 | orchestrator | 2025-09-02 00:44:04 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:04.911039 | orchestrator | 2025-09-02 00:44:04 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:44:04.911065 | orchestrator | 2025-09-02 00:44:04 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:07.998476 | orchestrator | 2025-09-02 00:44:07 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:44:07.998566 | orchestrator | 2025-09-02 00:44:07 | INFO  | Task b8fdfc25-c1b4-487f-b023-6ec1bc90136f is in state STARTED 2025-09-02 00:44:07.998582 | orchestrator | 2025-09-02 00:44:07 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:44:07.998594 | orchestrator | 2025-09-02 00:44:07 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:44:07.998604 | orchestrator | 2025-09-02 00:44:07 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:07.998615 | orchestrator | 2025-09-02 00:44:07 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:07.998626 | orchestrator | 2025-09-02 00:44:07 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:44:07.998637 | orchestrator | 2025-09-02 00:44:07 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:11.172674 | orchestrator | 2025-09-02 00:44:11 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:44:11.173328 | orchestrator | 2025-09-02 00:44:11 | INFO  | Task b8fdfc25-c1b4-487f-b023-6ec1bc90136f is in state STARTED 2025-09-02 00:44:11.174113 | orchestrator | 2025-09-02 00:44:11 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:44:11.176929 | 
orchestrator | 2025-09-02 00:44:11 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:44:11.176953 | orchestrator | 2025-09-02 00:44:11 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:11.178566 | orchestrator | 2025-09-02 00:44:11 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:11.179557 | orchestrator | 2025-09-02 00:44:11 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:44:11.180243 | orchestrator | 2025-09-02 00:44:11 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:14.355751 | orchestrator | 2025-09-02 00:44:14 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:44:14.355863 | orchestrator | 2025-09-02 00:44:14 | INFO  | Task b8fdfc25-c1b4-487f-b023-6ec1bc90136f is in state STARTED 2025-09-02 00:44:14.355878 | orchestrator | 2025-09-02 00:44:14 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:44:14.355890 | orchestrator | 2025-09-02 00:44:14 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:44:14.355900 | orchestrator | 2025-09-02 00:44:14 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:14.355911 | orchestrator | 2025-09-02 00:44:14 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:14.355922 | orchestrator | 2025-09-02 00:44:14 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:44:14.355933 | orchestrator | 2025-09-02 00:44:14 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:17.269490 | orchestrator | 2025-09-02 00:44:17 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:44:17.269631 | orchestrator | 2025-09-02 00:44:17 | INFO  | Task b8fdfc25-c1b4-487f-b023-6ec1bc90136f is in state SUCCESS 2025-09-02 00:44:17.274496 | orchestrator | 2025-09-02 00:44:17 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:44:17.275116 | orchestrator | 2025-09-02 00:44:17 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:44:17.275201 | orchestrator | 2025-09-02 00:44:17 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:17.278869 | orchestrator | 2025-09-02 00:44:17 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:17.278948 | orchestrator | 2025-09-02 00:44:17 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:44:17.278965 | orchestrator | 2025-09-02 00:44:17 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:20.489090 | orchestrator | 2025-09-02 00:44:20 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:44:20.489169 | orchestrator | 2025-09-02 00:44:20 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:44:20.489181 | orchestrator | 2025-09-02 00:44:20 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state STARTED 2025-09-02 00:44:20.489191 | orchestrator | 2025-09-02 00:44:20 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:20.489200 | orchestrator | 2025-09-02 00:44:20 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:20.489208 | orchestrator | 2025-09-02 00:44:20 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is 
in state STARTED 2025-09-02 00:44:20.489218 | orchestrator | 2025-09-02 00:44:20 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:23.608179 | orchestrator | 2025-09-02 00:44:23 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:44:23.613700 | orchestrator | 2025-09-02 00:44:23 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:44:23.613739 | orchestrator | 2025-09-02 00:44:23 | INFO  | Task 6b917a20-51cb-488b-adef-2088057493a6 is in state SUCCESS 2025-09-02 00:44:23.613753 | orchestrator | 2025-09-02 00:44:23 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:23.615808 | orchestrator | 2025-09-02 00:44:23 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:23.618245 | orchestrator | 2025-09-02 00:44:23 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:44:23.618271 | orchestrator | 2025-09-02 00:44:23 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:26.737653 | orchestrator | 2025-09-02 00:44:26 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:44:26.738226 | orchestrator | 2025-09-02 00:44:26 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:44:26.739454 | orchestrator | 2025-09-02 00:44:26 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:26.741186 | orchestrator | 2025-09-02 00:44:26 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:26.742109 | orchestrator | 2025-09-02 00:44:26 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:44:26.742708 | orchestrator | 2025-09-02 00:44:26 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:29.797936 | orchestrator | 2025-09-02 00:44:29 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:44:29.798130 | orchestrator | 2025-09-02 00:44:29 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:44:29.798649 | orchestrator | 2025-09-02 00:44:29 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:29.799269 | orchestrator | 2025-09-02 00:44:29 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:29.800045 | orchestrator | 2025-09-02 00:44:29 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:44:29.800072 | orchestrator | 2025-09-02 00:44:29 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:32.840703 | orchestrator | 2025-09-02 00:44:32 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:44:32.845278 | orchestrator | 2025-09-02 00:44:32 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:44:32.846106 | orchestrator | 2025-09-02 00:44:32 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:32.846958 | orchestrator | 2025-09-02 00:44:32 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:32.847750 | orchestrator | 2025-09-02 00:44:32 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:44:32.847768 | orchestrator | 2025-09-02 00:44:32 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:35.946259 | orchestrator | 2025-09-02 00:44:35 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in 
state STARTED 2025-09-02 00:44:35.946365 | orchestrator | 2025-09-02 00:44:35 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:44:35.946380 | orchestrator | 2025-09-02 00:44:35 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:35.946392 | orchestrator | 2025-09-02 00:44:35 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:35.946403 | orchestrator | 2025-09-02 00:44:35 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:44:35.946415 | orchestrator | 2025-09-02 00:44:35 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:39.015842 | orchestrator | 2025-09-02 00:44:39 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:44:39.015960 | orchestrator | 2025-09-02 00:44:39 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:44:39.015977 | orchestrator | 2025-09-02 00:44:39 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:39.015989 | orchestrator | 2025-09-02 00:44:39 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:39.016000 | orchestrator | 2025-09-02 00:44:39 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:44:39.016012 | orchestrator | 2025-09-02 00:44:39 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:42.084848 | orchestrator | 2025-09-02 00:44:42 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:44:42.087726 | orchestrator | 2025-09-02 00:44:42 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:44:42.089416 | orchestrator | 2025-09-02 00:44:42 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:42.089502 | orchestrator | 2025-09-02 00:44:42 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:42.093306 | orchestrator | 2025-09-02 00:44:42 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:44:42.093351 | orchestrator | 2025-09-02 00:44:42 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:45.130742 | orchestrator | 2025-09-02 00:44:45 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:44:45.131816 | orchestrator | 2025-09-02 00:44:45 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:44:45.134960 | orchestrator | 2025-09-02 00:44:45 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:45.135738 | orchestrator | 2025-09-02 00:44:45 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:45.136791 | orchestrator | 2025-09-02 00:44:45 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:44:45.136813 | orchestrator | 2025-09-02 00:44:45 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:48.179976 | orchestrator | 2025-09-02 00:44:48 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:44:48.180915 | orchestrator | 2025-09-02 00:44:48 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state STARTED 2025-09-02 00:44:48.182388 | orchestrator | 2025-09-02 00:44:48 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:48.184070 | orchestrator | 2025-09-02 00:44:48 | INFO  | Task 
5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:48.186748 | orchestrator | 2025-09-02 00:44:48 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:44:48.186805 | orchestrator | 2025-09-02 00:44:48 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:51.223915 | orchestrator | 2025-09-02 00:44:51 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:44:51.225780 | orchestrator | 2025-09-02 00:44:51 | INFO  | Task 8dfc2997-94a2-4f20-b6a9-1f2a08ff4350 is in state SUCCESS 2025-09-02 00:44:51.231427 | orchestrator | 2025-09-02 00:44:51.231444 | orchestrator | 2025-09-02 00:44:51.231448 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-09-02 00:44:51.231453 | orchestrator | 2025-09-02 00:44:51.231457 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-09-02 00:44:51.231462 | orchestrator | Tuesday 02 September 2025 00:43:34 +0000 (0:00:01.323) 0:00:01.325 ***** 2025-09-02 00:44:51.231467 | orchestrator | ok: [testbed-manager] => { 2025-09-02 00:44:51.231473 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-09-02 00:44:51.231479 | orchestrator | } 2025-09-02 00:44:51.231483 | orchestrator | 2025-09-02 00:44:51.231487 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-09-02 00:44:51.231520 | orchestrator | Tuesday 02 September 2025 00:43:34 +0000 (0:00:00.488) 0:00:01.813 ***** 2025-09-02 00:44:51.231556 | orchestrator | ok: [testbed-manager] 2025-09-02 00:44:51.231561 | orchestrator | 2025-09-02 00:44:51.231565 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-09-02 00:44:51.231569 | orchestrator | Tuesday 02 September 2025 00:43:36 +0000 (0:00:02.082) 0:00:03.896 ***** 2025-09-02 00:44:51.231573 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-09-02 00:44:51.231577 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-09-02 00:44:51.231581 | orchestrator | 2025-09-02 00:44:51.231585 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-09-02 00:44:51.231589 | orchestrator | Tuesday 02 September 2025 00:43:38 +0000 (0:00:01.867) 0:00:05.763 ***** 2025-09-02 00:44:51.231593 | orchestrator | changed: [testbed-manager] 2025-09-02 00:44:51.231598 | orchestrator | 2025-09-02 00:44:51.231602 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-09-02 00:44:51.231606 | orchestrator | Tuesday 02 September 2025 00:43:43 +0000 (0:00:04.850) 0:00:10.614 ***** 2025-09-02 00:44:51.231625 | orchestrator | changed: [testbed-manager] 2025-09-02 00:44:51.231629 | orchestrator | 2025-09-02 00:44:51.231633 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-09-02 00:44:51.231636 | orchestrator | Tuesday 02 September 2025 00:43:45 +0000 (0:00:01.613) 0:00:12.228 ***** 2025-09-02 00:44:51.231640 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
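The "Manage homer service" task above starts the docker compose project that was copied to /opt/homer and retries until its containers are up, which is why the first attempt logs FAILED - RETRYING before the eventual ok that follows; the openstackclient and netdata plays later in this log show the same bring-up-and-retry pattern. A rough Python sketch of such a loop (the compose directory layout and the retry interval are assumptions):

    import subprocess
    import time

    COMPOSE_DIR = "/opt/homer"   # docker-compose.yml was copied below /opt/homer above; exact layout assumed

    def containers_running() -> bool:
        # List only containers of this compose project that are in the "running" state.
        result = subprocess.run(
            ["docker", "compose", "ps", "--status", "running", "--quiet"],
            cwd=COMPOSE_DIR, capture_output=True, text=True, check=True,
        )
        return bool(result.stdout.strip())

    subprocess.run(["docker", "compose", "up", "-d"], cwd=COMPOSE_DIR, check=True)

    for _ in range(10):          # mirrors the "10 retries left" counter seen in the task output
        if containers_running():
            break
        time.sleep(5)            # assumed delay between retries
    else:
        raise RuntimeError("homer containers did not reach the running state")
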
2025-09-02 00:44:51.231644 | orchestrator | ok: [testbed-manager] 2025-09-02 00:44:51.231648 | orchestrator | 2025-09-02 00:44:51.231651 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-09-02 00:44:51.231655 | orchestrator | Tuesday 02 September 2025 00:44:14 +0000 (0:00:28.994) 0:00:41.223 ***** 2025-09-02 00:44:51.231659 | orchestrator | changed: [testbed-manager] 2025-09-02 00:44:51.231663 | orchestrator | 2025-09-02 00:44:51.231666 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:44:51.231670 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:44:51.231676 | orchestrator | 2025-09-02 00:44:51.231680 | orchestrator | 2025-09-02 00:44:51.231684 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:44:51.231687 | orchestrator | Tuesday 02 September 2025 00:44:16 +0000 (0:00:02.706) 0:00:43.930 ***** 2025-09-02 00:44:51.231691 | orchestrator | =============================================================================== 2025-09-02 00:44:51.231695 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 28.99s 2025-09-02 00:44:51.231698 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 4.85s 2025-09-02 00:44:51.231702 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.71s 2025-09-02 00:44:51.231706 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.08s 2025-09-02 00:44:51.231710 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.87s 2025-09-02 00:44:51.231713 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.61s 2025-09-02 00:44:51.231717 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.49s 2025-09-02 00:44:51.231721 | orchestrator | 2025-09-02 00:44:51.231724 | orchestrator | 2025-09-02 00:44:51.231728 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-09-02 00:44:51.231732 | orchestrator | 2025-09-02 00:44:51.231736 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-09-02 00:44:51.231740 | orchestrator | Tuesday 02 September 2025 00:43:34 +0000 (0:00:01.129) 0:00:01.129 ***** 2025-09-02 00:44:51.231744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-09-02 00:44:51.231750 | orchestrator | 2025-09-02 00:44:51.231753 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-09-02 00:44:51.231757 | orchestrator | Tuesday 02 September 2025 00:43:34 +0000 (0:00:00.430) 0:00:01.559 ***** 2025-09-02 00:44:51.231761 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-09-02 00:44:51.231764 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-09-02 00:44:51.231768 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-09-02 00:44:51.231772 | orchestrator | 2025-09-02 00:44:51.231776 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-09-02 
00:44:51.231779 | orchestrator | Tuesday 02 September 2025 00:43:36 +0000 (0:00:02.348) 0:00:03.908 ***** 2025-09-02 00:44:51.231783 | orchestrator | changed: [testbed-manager] 2025-09-02 00:44:51.231787 | orchestrator | 2025-09-02 00:44:51.231791 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-09-02 00:44:51.231795 | orchestrator | Tuesday 02 September 2025 00:43:39 +0000 (0:00:02.118) 0:00:06.027 ***** 2025-09-02 00:44:51.231809 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-09-02 00:44:51.231813 | orchestrator | ok: [testbed-manager] 2025-09-02 00:44:51.231817 | orchestrator | 2025-09-02 00:44:51.234586 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-09-02 00:44:51.234657 | orchestrator | Tuesday 02 September 2025 00:44:11 +0000 (0:00:32.654) 0:00:38.681 ***** 2025-09-02 00:44:51.234672 | orchestrator | changed: [testbed-manager] 2025-09-02 00:44:51.234685 | orchestrator | 2025-09-02 00:44:51.234697 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-09-02 00:44:51.234730 | orchestrator | Tuesday 02 September 2025 00:44:15 +0000 (0:00:04.076) 0:00:42.757 ***** 2025-09-02 00:44:51.234742 | orchestrator | ok: [testbed-manager] 2025-09-02 00:44:51.234754 | orchestrator | 2025-09-02 00:44:51.234765 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-09-02 00:44:51.234777 | orchestrator | Tuesday 02 September 2025 00:44:16 +0000 (0:00:00.913) 0:00:43.671 ***** 2025-09-02 00:44:51.234788 | orchestrator | changed: [testbed-manager] 2025-09-02 00:44:51.234799 | orchestrator | 2025-09-02 00:44:51.234811 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-09-02 00:44:51.234822 | orchestrator | Tuesday 02 September 2025 00:44:19 +0000 (0:00:02.688) 0:00:46.360 ***** 2025-09-02 00:44:51.234833 | orchestrator | changed: [testbed-manager] 2025-09-02 00:44:51.234844 | orchestrator | 2025-09-02 00:44:51.234855 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-09-02 00:44:51.234877 | orchestrator | Tuesday 02 September 2025 00:44:20 +0000 (0:00:01.193) 0:00:47.554 ***** 2025-09-02 00:44:51.234889 | orchestrator | changed: [testbed-manager] 2025-09-02 00:44:51.234900 | orchestrator | 2025-09-02 00:44:51.234911 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-09-02 00:44:51.234922 | orchestrator | Tuesday 02 September 2025 00:44:21 +0000 (0:00:00.958) 0:00:48.512 ***** 2025-09-02 00:44:51.234933 | orchestrator | ok: [testbed-manager] 2025-09-02 00:44:51.234944 | orchestrator | 2025-09-02 00:44:51.234955 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:44:51.234966 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:44:51.234979 | orchestrator | 2025-09-02 00:44:51.234990 | orchestrator | 2025-09-02 00:44:51.235001 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:44:51.235012 | orchestrator | Tuesday 02 September 2025 00:44:22 +0000 (0:00:00.606) 0:00:49.119 ***** 2025-09-02 00:44:51.235023 | orchestrator | 
=============================================================================== 2025-09-02 00:44:51.235034 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.65s 2025-09-02 00:44:51.235045 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 4.08s 2025-09-02 00:44:51.235056 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.69s 2025-09-02 00:44:51.235068 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.35s 2025-09-02 00:44:51.235079 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.12s 2025-09-02 00:44:51.235089 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.19s 2025-09-02 00:44:51.235100 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.96s 2025-09-02 00:44:51.235111 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.91s 2025-09-02 00:44:51.235122 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.61s 2025-09-02 00:44:51.235133 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.43s 2025-09-02 00:44:51.235144 | orchestrator | 2025-09-02 00:44:51.235773 | orchestrator | 2025-09-02 00:44:51.235803 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 00:44:51.235816 | orchestrator | 2025-09-02 00:44:51.235820 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 00:44:51.235824 | orchestrator | Tuesday 02 September 2025 00:43:34 +0000 (0:00:00.353) 0:00:00.353 ***** 2025-09-02 00:44:51.235828 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-09-02 00:44:51.235832 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-09-02 00:44:51.235836 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-09-02 00:44:51.235840 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-09-02 00:44:51.235844 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-09-02 00:44:51.235848 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-09-02 00:44:51.235852 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-09-02 00:44:51.235856 | orchestrator | 2025-09-02 00:44:51.235859 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-09-02 00:44:51.235863 | orchestrator | 2025-09-02 00:44:51.235867 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-09-02 00:44:51.235871 | orchestrator | Tuesday 02 September 2025 00:43:36 +0000 (0:00:01.310) 0:00:01.664 ***** 2025-09-02 00:44:51.236987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:44:51.237006 | orchestrator | 2025-09-02 00:44:51.237011 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-09-02 00:44:51.237016 | orchestrator | Tuesday 02 September 2025 00:43:37 +0000 (0:00:01.440) 0:00:03.104 ***** 2025-09-02 
00:44:51.237020 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:44:51.237024 | orchestrator | ok: [testbed-manager] 2025-09-02 00:44:51.237029 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:44:51.237032 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:44:51.237036 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:44:51.237040 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:44:51.237044 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:44:51.237048 | orchestrator | 2025-09-02 00:44:51.237052 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-09-02 00:44:51.237056 | orchestrator | Tuesday 02 September 2025 00:43:40 +0000 (0:00:02.722) 0:00:05.827 ***** 2025-09-02 00:44:51.237060 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:44:51.237063 | orchestrator | ok: [testbed-manager] 2025-09-02 00:44:51.237067 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:44:51.237071 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:44:51.237075 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:44:51.237078 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:44:51.237086 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:44:51.237090 | orchestrator | 2025-09-02 00:44:51.237094 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-09-02 00:44:51.237098 | orchestrator | Tuesday 02 September 2025 00:43:44 +0000 (0:00:04.131) 0:00:09.959 ***** 2025-09-02 00:44:51.237102 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:44:51.237106 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:44:51.237110 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:44:51.237113 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:44:51.237117 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:44:51.237121 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:44:51.237125 | orchestrator | changed: [testbed-manager] 2025-09-02 00:44:51.237128 | orchestrator | 2025-09-02 00:44:51.237132 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-09-02 00:44:51.237136 | orchestrator | Tuesday 02 September 2025 00:43:47 +0000 (0:00:03.058) 0:00:13.017 ***** 2025-09-02 00:44:51.237140 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:44:51.237144 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:44:51.237147 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:44:51.237157 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:44:51.237161 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:44:51.237165 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:44:51.237168 | orchestrator | changed: [testbed-manager] 2025-09-02 00:44:51.237172 | orchestrator | 2025-09-02 00:44:51.237176 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-09-02 00:44:51.237180 | orchestrator | Tuesday 02 September 2025 00:44:01 +0000 (0:00:14.260) 0:00:27.278 ***** 2025-09-02 00:44:51.237184 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:44:51.237187 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:44:51.237191 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:44:51.237195 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:44:51.237199 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:44:51.237202 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:44:51.237206 | orchestrator | changed: [testbed-manager] 2025-09-02 00:44:51.237210 | 
orchestrator | 2025-09-02 00:44:51.237214 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-09-02 00:44:51.237217 | orchestrator | Tuesday 02 September 2025 00:44:27 +0000 (0:00:25.646) 0:00:52.925 ***** 2025-09-02 00:44:51.237222 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:44:51.237227 | orchestrator | 2025-09-02 00:44:51.237231 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-09-02 00:44:51.237235 | orchestrator | Tuesday 02 September 2025 00:44:28 +0000 (0:00:01.383) 0:00:54.308 ***** 2025-09-02 00:44:51.237239 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-09-02 00:44:51.237243 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-09-02 00:44:51.237247 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-09-02 00:44:51.237251 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-09-02 00:44:51.237264 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-09-02 00:44:51.237268 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-09-02 00:44:51.237271 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-09-02 00:44:51.237275 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-09-02 00:44:51.237279 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-09-02 00:44:51.237283 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-09-02 00:44:51.237287 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-09-02 00:44:51.237290 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-09-02 00:44:51.237294 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-09-02 00:44:51.237298 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-09-02 00:44:51.237302 | orchestrator | 2025-09-02 00:44:51.237306 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-09-02 00:44:51.237310 | orchestrator | Tuesday 02 September 2025 00:44:32 +0000 (0:00:04.118) 0:00:58.427 ***** 2025-09-02 00:44:51.237314 | orchestrator | ok: [testbed-manager] 2025-09-02 00:44:51.237317 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:44:51.237321 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:44:51.237325 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:44:51.237329 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:44:51.237332 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:44:51.237336 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:44:51.237340 | orchestrator | 2025-09-02 00:44:51.237344 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-09-02 00:44:51.237348 | orchestrator | Tuesday 02 September 2025 00:44:34 +0000 (0:00:01.484) 0:00:59.912 ***** 2025-09-02 00:44:51.237351 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:44:51.237355 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:44:51.237359 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:44:51.237365 | orchestrator | changed: [testbed-manager] 2025-09-02 00:44:51.237369 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:44:51.237373 | orchestrator | 
changed: [testbed-node-4] 2025-09-02 00:44:51.237377 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:44:51.237381 | orchestrator | 2025-09-02 00:44:51.237384 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-09-02 00:44:51.237388 | orchestrator | Tuesday 02 September 2025 00:44:36 +0000 (0:00:02.116) 0:01:02.029 ***** 2025-09-02 00:44:51.237392 | orchestrator | ok: [testbed-manager] 2025-09-02 00:44:51.237396 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:44:51.237399 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:44:51.237403 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:44:51.237407 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:44:51.237411 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:44:51.237414 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:44:51.237418 | orchestrator | 2025-09-02 00:44:51.237422 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-09-02 00:44:51.237426 | orchestrator | Tuesday 02 September 2025 00:44:37 +0000 (0:00:01.201) 0:01:03.230 ***** 2025-09-02 00:44:51.237432 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:44:51.237436 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:44:51.237440 | orchestrator | ok: [testbed-manager] 2025-09-02 00:44:51.237443 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:44:51.237447 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:44:51.237451 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:44:51.237454 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:44:51.237458 | orchestrator | 2025-09-02 00:44:51.237462 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-09-02 00:44:51.237466 | orchestrator | Tuesday 02 September 2025 00:44:39 +0000 (0:00:02.323) 0:01:05.554 ***** 2025-09-02 00:44:51.237470 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-09-02 00:44:51.237483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:44:51.237487 | orchestrator | 2025-09-02 00:44:51.237504 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-02 00:44:51.237508 | orchestrator | Tuesday 02 September 2025 00:44:42 +0000 (0:00:02.243) 0:01:07.798 ***** 2025-09-02 00:44:51.237512 | orchestrator | changed: [testbed-manager] 2025-09-02 00:44:51.237516 | orchestrator | 2025-09-02 00:44:51.237519 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-02 00:44:51.237523 | orchestrator | Tuesday 02 September 2025 00:44:44 +0000 (0:00:02.221) 0:01:10.019 ***** 2025-09-02 00:44:51.237527 | orchestrator | changed: [testbed-manager] 2025-09-02 00:44:51.237531 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:44:51.237535 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:44:51.237538 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:44:51.237542 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:44:51.237546 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:44:51.237550 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:44:51.237553 | orchestrator | 2025-09-02 00:44:51.237557 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-02 00:44:51.237561 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:44:51.237566 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:44:51.237570 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:44:51.237574 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:44:51.237583 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:44:51.237587 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:44:51.237591 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:44:51.237595 | orchestrator | 2025-09-02 00:44:51.237599 | orchestrator | 2025-09-02 00:44:51.237602 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:44:51.237606 | orchestrator | Tuesday 02 September 2025 00:44:47 +0000 (0:00:02.925) 0:01:12.945 ***** 2025-09-02 00:44:51.237610 | orchestrator | =============================================================================== 2025-09-02 00:44:51.237614 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 25.65s 2025-09-02 00:44:51.237618 | orchestrator | osism.services.netdata : Add repository -------------------------------- 14.26s 2025-09-02 00:44:51.237621 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.13s 2025-09-02 00:44:51.237625 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.12s 2025-09-02 00:44:51.237629 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.06s 2025-09-02 00:44:51.237633 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.93s 2025-09-02 00:44:51.237636 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.72s 2025-09-02 00:44:51.237640 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.32s 2025-09-02 00:44:51.237644 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.24s 2025-09-02 00:44:51.237647 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.22s 2025-09-02 00:44:51.237651 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.12s 2025-09-02 00:44:51.237655 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.48s 2025-09-02 00:44:51.237659 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.44s 2025-09-02 00:44:51.237662 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.38s 2025-09-02 00:44:51.237666 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.31s 2025-09-02 00:44:51.237670 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.20s 2025-09-02 00:44:51.245010 | orchestrator | 2025-09-02 00:44:51 | INFO  | Task 
6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:51.246153 | orchestrator | 2025-09-02 00:44:51 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:51.248719 | orchestrator | 2025-09-02 00:44:51 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:44:51.248739 | orchestrator | 2025-09-02 00:44:51 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:54.290310 | orchestrator | 2025-09-02 00:44:54 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:44:54.293159 | orchestrator | 2025-09-02 00:44:54 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:54.294960 | orchestrator | 2025-09-02 00:44:54 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:54.296825 | orchestrator | 2025-09-02 00:44:54 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:44:54.297077 | orchestrator | 2025-09-02 00:44:54 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:44:57.343458 | orchestrator | 2025-09-02 00:44:57 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:44:57.344811 | orchestrator | 2025-09-02 00:44:57 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:44:57.345889 | orchestrator | 2025-09-02 00:44:57 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state STARTED 2025-09-02 00:44:57.347094 | orchestrator | 2025-09-02 00:44:57 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:44:57.347119 | orchestrator | 2025-09-02 00:44:57 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:45:00.380384 | orchestrator | 2025-09-02 00:45:00 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:45:00.382276 | orchestrator | 2025-09-02 00:45:00 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:45:00.384025 | orchestrator | 2025-09-02 00:45:00 | INFO  | Task 5fbdf246-5acb-4362-9ac4-283c485e5dd0 is in state SUCCESS 2025-09-02 00:45:00.385736 | orchestrator | 2025-09-02 00:45:00 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:45:00.385760 | orchestrator | 2025-09-02 00:45:00 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:45:03.431331 | orchestrator | 2025-09-02 00:45:03 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:45:03.431821 | orchestrator | 2025-09-02 00:45:03 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:45:03.432539 | orchestrator | 2025-09-02 00:45:03 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:45:03.432625 | orchestrator | 2025-09-02 00:45:03 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:45:06.534357 | orchestrator | 2025-09-02 00:45:06 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:45:06.534467 | orchestrator | 2025-09-02 00:45:06 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:45:06.537113 | orchestrator | 2025-09-02 00:45:06 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:45:06.537144 | orchestrator | 2025-09-02 00:45:06 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:45:09.572504 | orchestrator | 2025-09-02 00:45:09 | INFO  | Task 
ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:45:09.575257 | orchestrator | 2025-09-02 00:45:09 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:45:09.576402 | orchestrator | 2025-09-02 00:45:09 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:45:09.576433 | orchestrator | 2025-09-02 00:45:09 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:45:12.632009 | orchestrator | 2025-09-02 00:45:12 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:45:12.636244 | orchestrator | 2025-09-02 00:45:12 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:45:12.638098 | orchestrator | 2025-09-02 00:45:12 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:45:12.638452 | orchestrator | 2025-09-02 00:45:12 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:45:15.687483 | orchestrator | 2025-09-02 00:45:15 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:45:15.688662 | orchestrator | 2025-09-02 00:45:15 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:45:15.689686 | orchestrator | 2025-09-02 00:45:15 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:45:15.689735 | orchestrator | 2025-09-02 00:45:15 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:45:18.744484 | orchestrator | 2025-09-02 00:45:18 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:45:18.748758 | orchestrator | 2025-09-02 00:45:18 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:45:18.752550 | orchestrator | 2025-09-02 00:45:18 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:45:18.752574 | orchestrator | 2025-09-02 00:45:18 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:45:21.827088 | orchestrator | 2025-09-02 00:45:21 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:45:21.829955 | orchestrator | 2025-09-02 00:45:21 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:45:21.832384 | orchestrator | 2025-09-02 00:45:21 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:45:21.832409 | orchestrator | 2025-09-02 00:45:21 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:45:24.899168 | orchestrator | 2025-09-02 00:45:24 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:45:24.900976 | orchestrator | 2025-09-02 00:45:24 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:45:24.903254 | orchestrator | 2025-09-02 00:45:24 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:45:24.903485 | orchestrator | 2025-09-02 00:45:24 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:45:27.947556 | orchestrator | 2025-09-02 00:45:27 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:45:27.948496 | orchestrator | 2025-09-02 00:45:27 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:45:27.950006 | orchestrator | 2025-09-02 00:45:27 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:45:27.950179 | orchestrator | 2025-09-02 00:45:27 | INFO  | Wait 1 second(s) until the next 
check 2025-09-02 00:45:30.999287 | orchestrator | 2025-09-02 00:45:30 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:45:31.000433 | orchestrator | 2025-09-02 00:45:31 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:45:31.002811 | orchestrator | 2025-09-02 00:45:31 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:45:31.002836 | orchestrator | 2025-09-02 00:45:31 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:45:34.044901 | orchestrator | 2025-09-02 00:45:34 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:45:34.047408 | orchestrator | 2025-09-02 00:45:34 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:45:34.048893 | orchestrator | 2025-09-02 00:45:34 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:45:34.049313 | orchestrator | 2025-09-02 00:45:34 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:45:37.099257 | orchestrator | 2025-09-02 00:45:37 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:45:37.102521 | orchestrator | 2025-09-02 00:45:37 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:45:37.103352 | orchestrator | 2025-09-02 00:45:37 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:45:37.103785 | orchestrator | 2025-09-02 00:45:37 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:45:40.145542 | orchestrator | 2025-09-02 00:45:40 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:45:40.146784 | orchestrator | 2025-09-02 00:45:40 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:45:40.149369 | orchestrator | 2025-09-02 00:45:40 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:45:40.149396 | orchestrator | 2025-09-02 00:45:40 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:45:43.199449 | orchestrator | 2025-09-02 00:45:43 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:45:43.199801 | orchestrator | 2025-09-02 00:45:43 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:45:43.201209 | orchestrator | 2025-09-02 00:45:43 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:45:43.201245 | orchestrator | 2025-09-02 00:45:43 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:45:46.256201 | orchestrator | 2025-09-02 00:45:46 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:45:46.256283 | orchestrator | 2025-09-02 00:45:46 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:45:46.256610 | orchestrator | 2025-09-02 00:45:46 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 00:45:46.257286 | orchestrator | 2025-09-02 00:45:46 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:45:49.298580 | orchestrator | 2025-09-02 00:45:49 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:45:49.299487 | orchestrator | 2025-09-02 00:45:49 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:45:49.299948 | orchestrator | 2025-09-02 00:45:49 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED 2025-09-02 
00:45:49.299971 | orchestrator | 2025-09-02 00:45:49 | INFO  | Wait 1 second(s) until the next check
2025-09-02 00:45:52.343020 | orchestrator | 2025-09-02 00:45:52 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED
2025-09-02 00:45:52.344224 | orchestrator | 2025-09-02 00:45:52 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED
2025-09-02 00:45:52.346649 | orchestrator | 2025-09-02 00:45:52 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED
2025-09-02 00:45:52.348685 | orchestrator | 2025-09-02 00:45:52 | INFO  | Wait 1 second(s) until the next check
2025-09-02 00:45:55.388816 | orchestrator | 2025-09-02 00:45:55 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED
2025-09-02 00:45:55.389491 | orchestrator | 2025-09-02 00:45:55 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED
2025-09-02 00:45:55.390501 | orchestrator | 2025-09-02 00:45:55 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED
2025-09-02 00:45:55.390635 | orchestrator | 2025-09-02 00:45:55 | INFO  | Wait 1 second(s) until the next check
2025-09-02 00:45:58.434504 | orchestrator | 2025-09-02 00:45:58 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED
2025-09-02 00:45:58.436500 | orchestrator | 2025-09-02 00:45:58 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED
2025-09-02 00:45:58.437899 | orchestrator | 2025-09-02 00:45:58 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state STARTED
2025-09-02 00:45:58.438303 | orchestrator | 2025-09-02 00:45:58 | INFO  | Wait 1 second(s) until the next check
2025-09-02 00:46:01.501860 | orchestrator | 2025-09-02 00:46:01 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED
2025-09-02 00:46:01.501952 | orchestrator | 2025-09-02 00:46:01 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED
2025-09-02 00:46:01.501967 | orchestrator | 2025-09-02 00:46:01 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED
2025-09-02 00:46:01.501979 | orchestrator | 2025-09-02 00:46:01 | INFO  | Task 749bc552-c95d-4fd8-a314-3a65d75bc93c is in state STARTED
2025-09-02 00:46:01.501991 | orchestrator | 2025-09-02 00:46:01 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED
2025-09-02 00:46:01.502002 | orchestrator | 2025-09-02 00:46:01 | INFO  | Task 414ecc6d-4891-4236-a856-360fe6501860 is in state SUCCESS
2025-09-02 00:46:01.502934 | orchestrator |
2025-09-02 00:46:01.502967 | orchestrator |
2025-09-02 00:46:01.502979 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-09-02 00:46:01.502992 | orchestrator |
2025-09-02 00:46:01.503003 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-09-02 00:46:01.503014 | orchestrator | Tuesday 02 September 2025 00:43:56 +0000 (0:00:00.421) 0:00:00.421 *****
2025-09-02 00:46:01.503026 | orchestrator | ok: [testbed-manager]
2025-09-02 00:46:01.503038 | orchestrator |
2025-09-02 00:46:01.503049 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-09-02 00:46:01.503060 | orchestrator | Tuesday 02 September 2025 00:43:57 +0000 (0:00:01.082) 0:00:01.503 *****
2025-09-02 00:46:01.503072 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-09-02 00:46:01.503083 | orchestrator |
2025-09-02 00:46:01.503095 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-09-02 00:46:01.503106 | orchestrator | Tuesday 02 September 2025 00:43:58 +0000 (0:00:00.724) 0:00:02.228 *****
2025-09-02 00:46:01.503117 | orchestrator | changed: [testbed-manager]
2025-09-02 00:46:01.503128 | orchestrator |
2025-09-02 00:46:01.503146 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-09-02 00:46:01.503158 | orchestrator | Tuesday 02 September 2025 00:43:59 +0000 (0:00:01.501) 0:00:03.730 *****
2025-09-02 00:46:01.503168 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-09-02 00:46:01.503180 | orchestrator | ok: [testbed-manager]
2025-09-02 00:46:01.503191 | orchestrator |
2025-09-02 00:46:01.503202 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-09-02 00:46:01.503213 | orchestrator | Tuesday 02 September 2025 00:44:48 +0000 (0:00:48.874) 0:00:52.605 *****
2025-09-02 00:46:01.503224 | orchestrator | changed: [testbed-manager]
2025-09-02 00:46:01.503235 | orchestrator |
2025-09-02 00:46:01.503246 | orchestrator | PLAY RECAP *********************************************************************
2025-09-02 00:46:01.503257 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-02 00:46:01.503270 | orchestrator |
2025-09-02 00:46:01.503281 | orchestrator |
2025-09-02 00:46:01.503292 | orchestrator | TASKS RECAP ********************************************************************
2025-09-02 00:46:01.503304 | orchestrator | Tuesday 02 September 2025 00:44:58 +0000 (0:00:10.152) 0:01:02.757 *****
2025-09-02 00:46:01.503315 | orchestrator | ===============================================================================
2025-09-02 00:46:01.503326 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 48.87s
2025-09-02 00:46:01.503337 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 10.15s
2025-09-02 00:46:01.503348 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.50s
2025-09-02 00:46:01.503359 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.08s
2025-09-02 00:46:01.503370 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.72s
2025-09-02 00:46:01.503381 | orchestrator |
2025-09-02 00:46:01.503412 | orchestrator |
2025-09-02 00:46:01.503424 | orchestrator | PLAY [Apply role common] *******************************************************
2025-09-02 00:46:01.503435 | orchestrator |
2025-09-02 00:46:01.503445 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-02 00:46:01.503456 | orchestrator | Tuesday 02 September 2025 00:43:24 +0000 (0:00:00.313) 0:00:00.313 *****
2025-09-02 00:46:01.503678 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-02 00:46:01.503697 | orchestrator |
2025-09-02 00:46:01.503710 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-09-02 00:46:01.503724 | orchestrator | Tuesday 02 September 2025 00:43:26 +0000 (0:00:01.749) 0:00:02.063 *****
2025-09-02 00:46:01.503738 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
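The phpmyadmin play recapped above only creates /opt/phpmyadmin, templates a docker-compose.yml into it, attaches the service to the pre-created external traefik network, and lets a handler restart it; the single "FAILED - RETRYING ... (10 retries left)" entry is Ansible's ordinary retries/until polling while the container comes up, which accounts for most of the 48.87s task time. The compose file itself is not printed in this log; a minimal sketch of what such a file might look like (image, network name, and environment are assumptions for illustration, not taken from the role):

--- # docker-compose.yml (illustrative sketch, not the file shipped by the role)
services:
  phpmyadmin:
    image: phpmyadmin:latest          # image and tag assumed
    restart: unless-stopped
    environment:
      PMA_ARBITRARY: "1"              # assumed: allow connecting to any reachable DB host
    networks:
      - traefik
networks:
  traefik:
    external: true                    # matches the "Create traefik external network" task

With a file like this in /opt/phpmyadmin, bringing the stack up with docker compose in that directory is roughly what the "Manage phpmyadmin service" task converges on, with the role's retry loop waiting until the service is actually reachable.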
2025-09-02 00:46:01.503751 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-02 00:46:01.503763 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-02 00:46:01.503841 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-02 00:46:01.503854 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-02 00:46:01.503866 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-02 00:46:01.503880 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-02 00:46:01.503893 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-02 00:46:01.503904 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-02 00:46:01.503914 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-02 00:46:01.503927 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-02 00:46:01.503937 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-02 00:46:01.503948 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-02 00:46:01.503959 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-02 00:46:01.503970 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-02 00:46:01.503981 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-02 00:46:01.504005 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-02 00:46:01.504016 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-02 00:46:01.504027 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-02 00:46:01.504038 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-02 00:46:01.504049 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-02 00:46:01.504060 | orchestrator | 2025-09-02 00:46:01.504070 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-02 00:46:01.504081 | orchestrator | Tuesday 02 September 2025 00:43:30 +0000 (0:00:04.451) 0:00:06.514 ***** 2025-09-02 00:46:01.504097 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:46:01.504108 | orchestrator | 2025-09-02 00:46:01.504145 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-09-02 00:46:01.504156 | orchestrator | Tuesday 02 September 2025 00:43:31 +0000 (0:00:01.212) 0:00:07.727 ***** 2025-09-02 00:46:01.504170 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.504193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.504204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.504215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.504225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.504235 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.504280 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2025-09-02 00:46:01.504298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.504316 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.504326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.504337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.504347 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.504363 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.504374 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.504400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.504446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.504457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.504467 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.504477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.504487 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.504497 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.504507 | orchestrator | 2025-09-02 00:46:01.504517 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-09-02 00:46:01.504551 | orchestrator | Tuesday 02 September 2025 00:43:37 +0000 (0:00:05.519) 0:00:13.247 ***** 2025-09-02 00:46:01.504574 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-02 00:46:01.504590 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.504608 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.504618 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:46:01.504628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-02 00:46:01.504639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.504649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.504659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-02 00:46:01.504670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.504700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.504718 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:46:01.504728 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:46:01.504754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-02 00:46:01.504783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.504794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.504805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-02 00:46:01.504815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.504825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.504835 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:46:01.504845 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:46:01.504855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-02 00:46:01.504880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.504891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.504901 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:46:01.504911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-02 00:46:01.504928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.504938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.504948 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:46:01.504958 | orchestrator | 2025-09-02 00:46:01.504968 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-09-02 00:46:01.504978 | orchestrator | Tuesday 02 September 2025 00:43:39 +0000 (0:00:01.836) 0:00:15.084 ***** 2025-09-02 00:46:01.504988 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-02 00:46:01.504998 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.505020 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.505036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-02 00:46:01.505046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.505057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.505067 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:46:01.505077 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:46:01.505087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-02 00:46:01.505097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.505107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.505124 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:46:01.505135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-02 00:46:01.507090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.507135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.507148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-02 00:46:01.507159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.507170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.507180 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:46:01.507190 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:46:01.507200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-02 00:46:01.507221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.507248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.507258 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:46:01.507269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-02 00:46:01.507285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-09-02 00:46:01.507295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.507305 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:46:01.507315 | orchestrator | 2025-09-02 00:46:01.507325 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-09-02 00:46:01.507336 | orchestrator | Tuesday 02 September 2025 00:43:41 +0000 (0:00:02.639) 0:00:17.723 ***** 2025-09-02 00:46:01.507346 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:46:01.507355 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:46:01.507365 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:46:01.507375 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:46:01.507385 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:46:01.507395 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:46:01.507404 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:46:01.507414 | orchestrator | 2025-09-02 00:46:01.507424 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-09-02 00:46:01.507434 | orchestrator | Tuesday 02 September 2025 00:43:42 +0000 (0:00:01.123) 0:00:18.847 ***** 2025-09-02 00:46:01.507450 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:46:01.507460 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:46:01.507470 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:46:01.507480 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:46:01.507489 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:46:01.507499 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:46:01.507508 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:46:01.507518 | orchestrator | 2025-09-02 00:46:01.507528 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-09-02 00:46:01.507537 | orchestrator | Tuesday 02 September 2025 00:43:44 +0000 (0:00:01.702) 0:00:20.549 ***** 2025-09-02 00:46:01.507548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.507558 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.507574 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.507585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.507600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.507611 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.507651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.507662 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.507672 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.507683 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.507699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.507733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.507745 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.507762 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.507803 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.507814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.507825 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.507835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.507852 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.507868 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.507879 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.507889 | orchestrator | 2025-09-02 00:46:01.507898 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-02 00:46:01.507915 | orchestrator | Tuesday 02 September 2025 00:43:53 +0000 
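The loop above renders one config.json per enabled service in the kolla-ansible "common" service map (fluentd, kolla-toolbox, cron). A minimal Python sketch of that data shape, assuming the keys shown in the item output; the filtering helper is illustrative and not kolla-ansible code:

```python
# Sketch only: mirrors the service map echoed in the loop output above.
# Key names (container_name, image, volumes, ...) come from the log; the
# helper and the print statement are assumptions for illustration.
common_services = {
    "fluentd": {
        "container_name": "fluentd",
        "group": "fluentd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/fluentd:2024.2",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": [
            "/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
    "cron": {
        "container_name": "cron",
        "group": "cron",
        "enabled": True,
        "image": "registry.osism.tech/kolla/cron:2024.2",
        "environment": {"KOLLA_LOGROTATE_SCHEDULE": "daily"},
        "volumes": ["/etc/kolla/cron/:/var/lib/kolla/config_files/:ro"],
        "dimensions": {},
    },
}

def enabled_services(services):
    """Yield (name, definition) pairs for services flagged enabled."""
    for name, definition in services.items():
        if definition.get("enabled"):
            yield name, definition

for name, _ in enabled_services(common_services):
    print(f"would render /etc/kolla/{name}/config.json")
```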
(0:00:09.109) 0:00:29.659 ***** 2025-09-02 00:46:01.507925 | orchestrator | [WARNING]: Skipped 2025-09-02 00:46:01.507936 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-02 00:46:01.507946 | orchestrator | to this access issue: 2025-09-02 00:46:01.507956 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-02 00:46:01.507966 | orchestrator | directory 2025-09-02 00:46:01.507976 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-02 00:46:01.507985 | orchestrator | 2025-09-02 00:46:01.507995 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-09-02 00:46:01.508005 | orchestrator | Tuesday 02 September 2025 00:43:55 +0000 (0:00:01.375) 0:00:31.034 ***** 2025-09-02 00:46:01.508015 | orchestrator | [WARNING]: Skipped 2025-09-02 00:46:01.508025 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-02 00:46:01.508035 | orchestrator | to this access issue: 2025-09-02 00:46:01.508045 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-02 00:46:01.508054 | orchestrator | directory 2025-09-02 00:46:01.508064 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-02 00:46:01.508074 | orchestrator | 2025-09-02 00:46:01.508084 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-02 00:46:01.508094 | orchestrator | Tuesday 02 September 2025 00:43:57 +0000 (0:00:02.071) 0:00:33.106 ***** 2025-09-02 00:46:01.508103 | orchestrator | [WARNING]: Skipped 2025-09-02 00:46:01.508113 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-02 00:46:01.508123 | orchestrator | to this access issue: 2025-09-02 00:46:01.508132 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-02 00:46:01.508142 | orchestrator | directory 2025-09-02 00:46:01.508152 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-02 00:46:01.508162 | orchestrator | 2025-09-02 00:46:01.508171 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-02 00:46:01.508181 | orchestrator | Tuesday 02 September 2025 00:43:58 +0000 (0:00:01.208) 0:00:34.314 ***** 2025-09-02 00:46:01.508191 | orchestrator | [WARNING]: Skipped 2025-09-02 00:46:01.508201 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-02 00:46:01.508210 | orchestrator | to this access issue: 2025-09-02 00:46:01.508220 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-02 00:46:01.508230 | orchestrator | directory 2025-09-02 00:46:01.508239 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-02 00:46:01.508249 | orchestrator | 2025-09-02 00:46:01.508259 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-02 00:46:01.508269 | orchestrator | Tuesday 02 September 2025 00:43:59 +0000 (0:00:00.887) 0:00:35.202 ***** 2025-09-02 00:46:01.508279 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:46:01.508288 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:46:01.508298 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:46:01.508308 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:46:01.508317 | orchestrator | changed: 
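The warnings above are benign: the find tasks look for optional fluentd overlay directories (input, filter, format, output) under /opt/configuration/environments/kolla/files/overlays/fluentd and skip any that do not exist. A rough sketch of that pattern, assuming a simple *.conf glob; the actual kolla-ansible find parameters are not visible in the log:

```python
# Minimal sketch of the "find custom fluentd config overlays" pattern that the
# warnings above refer to; only the base path is taken from the log output.
from pathlib import Path

def find_overlay_confs(kind: str):
    base = Path("/opt/configuration/environments/kolla/files/overlays/fluentd") / kind
    if not base.is_dir():
        # The Ansible find task logs a warning here and simply returns no matches.
        return []
    return sorted(str(p) for p in base.glob("*.conf"))

for kind in ("input", "filter", "format", "output"):
    print(kind, find_overlay_confs(kind))
```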
[testbed-node-4] 2025-09-02 00:46:01.508327 | orchestrator | changed: [testbed-manager] 2025-09-02 00:46:01.508336 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:46:01.508346 | orchestrator | 2025-09-02 00:46:01.508356 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-02 00:46:01.508366 | orchestrator | Tuesday 02 September 2025 00:44:04 +0000 (0:00:05.631) 0:00:40.833 ***** 2025-09-02 00:46:01.508376 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-02 00:46:01.508386 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-02 00:46:01.508396 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-02 00:46:01.508416 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-02 00:46:01.508426 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-02 00:46:01.508436 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-02 00:46:01.508446 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-02 00:46:01.508455 | orchestrator | 2025-09-02 00:46:01.508465 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-02 00:46:01.508475 | orchestrator | Tuesday 02 September 2025 00:44:09 +0000 (0:00:05.013) 0:00:45.847 ***** 2025-09-02 00:46:01.508485 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:46:01.508494 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:46:01.508504 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:46:01.508514 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:46:01.508524 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:46:01.508537 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:46:01.508547 | orchestrator | changed: [testbed-manager] 2025-09-02 00:46:01.508557 | orchestrator | 2025-09-02 00:46:01.508567 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-02 00:46:01.508576 | orchestrator | Tuesday 02 September 2025 00:44:13 +0000 (0:00:04.068) 0:00:49.916 ***** 2025-09-02 00:46:01.508586 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.508597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
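The "Ensure RabbitMQ Erlang cookie exists" task above distributes a single shared secret: every RabbitMQ node must present the same Erlang cookie to join the cluster. A minimal sketch of that idea, assuming a random uppercase cookie value; how kolla-ansible actually generates and stores the cookie is not shown here:

```python
# Hedged sketch: the Erlang cookie is just a shared secret string that all
# RabbitMQ nodes must agree on. The generation scheme below is an assumption
# for illustration, not kolla-ansible's implementation.
import secrets

def make_erlang_cookie(length: int = 40) -> str:
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    return "".join(secrets.choice(alphabet) for _ in range(length))

cookie = make_erlang_cookie()
hosts = ["testbed-manager"] + [f"testbed-node-{i}" for i in range(6)]
deployment = {host: cookie for host in hosts}  # identical value on every host
assert len(set(deployment.values())) == 1
```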
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.508607 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.508635 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.508645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.508667 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.508677 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.508688 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 
00:46:01.508704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.508714 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.508725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.508735 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.508755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.508766 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2025-09-02 00:46:01.508797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:46:01.508807 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.508818 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.508828 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.508838 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.508848 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.508864 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.508874 | orchestrator | 2025-09-02 00:46:01.508884 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-02 
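In the ownership/permission loop above, the kolla-toolbox item is skipped on every host while fluentd and cron are handled; the exact condition is not printed in the log. As a generic illustration of how kolla-ansible item loops gate work on "enabled" flags and group membership (the real condition for this task may differ), a small sketch with assumed group assignments:

```python
# Illustrative gating sketch; the group membership below is invented for the
# example and is not taken from the real testbed inventory.
host_groups = {"testbed-node-0": {"fluentd", "cron"}}  # assumed example

def should_handle(host, definition):
    return definition.get("enabled", False) and definition.get("group") in host_groups.get(host, set())

services = {
    "fluentd": {"enabled": True, "group": "fluentd"},
    "kolla-toolbox": {"enabled": True, "group": "kolla-toolbox"},
    "cron": {"enabled": True, "group": "cron"},
}
for name, definition in services.items():
    state = "ok" if should_handle("testbed-node-0", definition) else "skipping"
    print(f"{state}: [testbed-node-0] => (item={name})")
```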
00:46:01.508894 | orchestrator | Tuesday 02 September 2025 00:44:16 +0000 (0:00:02.134) 0:00:52.051 ***** 2025-09-02 00:46:01.508904 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-02 00:46:01.508913 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-02 00:46:01.508923 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-02 00:46:01.508941 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-02 00:46:01.508951 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-02 00:46:01.508961 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-02 00:46:01.508971 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-02 00:46:01.508980 | orchestrator | 2025-09-02 00:46:01.508990 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-02 00:46:01.509000 | orchestrator | Tuesday 02 September 2025 00:44:19 +0000 (0:00:03.240) 0:00:55.291 ***** 2025-09-02 00:46:01.509010 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-02 00:46:01.509019 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-02 00:46:01.509033 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-02 00:46:01.509043 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-02 00:46:01.509053 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-02 00:46:01.509063 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-02 00:46:01.509072 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-02 00:46:01.509082 | orchestrator | 2025-09-02 00:46:01.509092 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-02 00:46:01.509102 | orchestrator | Tuesday 02 September 2025 00:44:22 +0000 (0:00:03.065) 0:00:58.357 ***** 2025-09-02 00:46:01.509112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.509123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.509138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.509149 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.509159 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.509175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.509190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.509200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.509210 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.509226 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.509236 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.509246 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-02 00:46:01.509262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.509272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.509290 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.509300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.509311 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.509340 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.509351 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.509361 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.509371 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:46:01.509381 | orchestrator | 2025-09-02 00:46:01.509396 | orchestrator | TASK [common : Creating log volume] 
******************************************** 2025-09-02 00:46:01.509406 | orchestrator | Tuesday 02 September 2025 00:44:27 +0000 (0:00:04.746) 0:01:03.104 ***** 2025-09-02 00:46:01.509416 | orchestrator | changed: [testbed-manager] 2025-09-02 00:46:01.509426 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:46:01.509435 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:46:01.509445 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:46:01.509455 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:46:01.509465 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:46:01.509474 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:46:01.509484 | orchestrator | 2025-09-02 00:46:01.509494 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-02 00:46:01.509503 | orchestrator | Tuesday 02 September 2025 00:44:29 +0000 (0:00:01.929) 0:01:05.034 ***** 2025-09-02 00:46:01.509513 | orchestrator | changed: [testbed-manager] 2025-09-02 00:46:01.509523 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:46:01.509532 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:46:01.509542 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:46:01.509551 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:46:01.509561 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:46:01.509571 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:46:01.509580 | orchestrator | 2025-09-02 00:46:01.509594 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-02 00:46:01.509604 | orchestrator | Tuesday 02 September 2025 00:44:30 +0000 (0:00:01.591) 0:01:06.625 ***** 2025-09-02 00:46:01.509620 | orchestrator | 2025-09-02 00:46:01.509630 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-02 00:46:01.509640 | orchestrator | Tuesday 02 September 2025 00:44:30 +0000 (0:00:00.060) 0:01:06.685 ***** 2025-09-02 00:46:01.509650 | orchestrator | 2025-09-02 00:46:01.509660 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-02 00:46:01.509669 | orchestrator | Tuesday 02 September 2025 00:44:30 +0000 (0:00:00.064) 0:01:06.750 ***** 2025-09-02 00:46:01.509679 | orchestrator | 2025-09-02 00:46:01.509689 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-02 00:46:01.509698 | orchestrator | Tuesday 02 September 2025 00:44:30 +0000 (0:00:00.068) 0:01:06.818 ***** 2025-09-02 00:46:01.509708 | orchestrator | 2025-09-02 00:46:01.509717 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-02 00:46:01.509727 | orchestrator | Tuesday 02 September 2025 00:44:31 +0000 (0:00:00.219) 0:01:07.038 ***** 2025-09-02 00:46:01.509737 | orchestrator | 2025-09-02 00:46:01.509746 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-02 00:46:01.509756 | orchestrator | Tuesday 02 September 2025 00:44:31 +0000 (0:00:00.074) 0:01:07.113 ***** 2025-09-02 00:46:01.509766 | orchestrator | 2025-09-02 00:46:01.509821 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-02 00:46:01.509831 | orchestrator | Tuesday 02 September 2025 00:44:31 +0000 (0:00:00.071) 0:01:07.185 ***** 2025-09-02 00:46:01.509841 | orchestrator | 2025-09-02 00:46:01.509850 | orchestrator | RUNNING HANDLER [common : Restart 
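The repeated empty "Flush handlers" entries above and the restart handlers that follow reflect Ansible's notification model: a task that reports "changed" queues a handler at most once, and queued handlers only run when handlers are flushed. A toy model of that behaviour:

```python
# Toy model of Ansible's notify/flush-handlers semantics seen in the log above:
# duplicate notifications collapse, and handlers run only at flush time.
notified = []

def notify(handler):
    if handler not in notified:
        notified.append(handler)

def flush_handlers():
    while notified:
        handler = notified.pop(0)
        print(f"RUNNING HANDLER [{handler}]")

notify("common : Restart fluentd container")
notify("common : Restart fluentd container")   # duplicate: queued only once
notify("common : Restart kolla-toolbox container")
notify("common : Restart cron container")
flush_handlers()
```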
fluentd container] *************************** 2025-09-02 00:46:01.509860 | orchestrator | Tuesday 02 September 2025 00:44:31 +0000 (0:00:00.202) 0:01:07.387 ***** 2025-09-02 00:46:01.509870 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:46:01.509879 | orchestrator | changed: [testbed-manager] 2025-09-02 00:46:01.509889 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:46:01.509899 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:46:01.509908 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:46:01.509918 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:46:01.509928 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:46:01.509937 | orchestrator | 2025-09-02 00:46:01.509946 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-09-02 00:46:01.509954 | orchestrator | Tuesday 02 September 2025 00:45:04 +0000 (0:00:32.741) 0:01:40.128 ***** 2025-09-02 00:46:01.509962 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:46:01.509970 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:46:01.509978 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:46:01.509986 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:46:01.509994 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:46:01.510002 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:46:01.510009 | orchestrator | changed: [testbed-manager] 2025-09-02 00:46:01.510046 | orchestrator | 2025-09-02 00:46:01.510055 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-09-02 00:46:01.510063 | orchestrator | Tuesday 02 September 2025 00:45:46 +0000 (0:00:41.884) 0:02:22.013 ***** 2025-09-02 00:46:01.510071 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:46:01.510079 | orchestrator | ok: [testbed-manager] 2025-09-02 00:46:01.510087 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:46:01.510095 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:46:01.510103 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:46:01.510111 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:46:01.510119 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:46:01.510126 | orchestrator | 2025-09-02 00:46:01.510135 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-09-02 00:46:01.510142 | orchestrator | Tuesday 02 September 2025 00:45:48 +0000 (0:00:02.543) 0:02:24.556 ***** 2025-09-02 00:46:01.510150 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:46:01.510158 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:46:01.510166 | orchestrator | changed: [testbed-manager] 2025-09-02 00:46:01.510174 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:46:01.510188 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:46:01.510196 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:46:01.510204 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:46:01.510212 | orchestrator | 2025-09-02 00:46:01.510220 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:46:01.510229 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-02 00:46:01.510237 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-02 00:46:01.510250 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-02 00:46:01.510259 | 
orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-02 00:46:01.510267 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-02 00:46:01.510275 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-02 00:46:01.510283 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-02 00:46:01.510291 | orchestrator | 2025-09-02 00:46:01.510299 | orchestrator | 2025-09-02 00:46:01.510307 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:46:01.510319 | orchestrator | Tuesday 02 September 2025 00:45:58 +0000 (0:00:09.586) 0:02:34.143 ***** 2025-09-02 00:46:01.510327 | orchestrator | =============================================================================== 2025-09-02 00:46:01.510335 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 41.88s 2025-09-02 00:46:01.510343 | orchestrator | common : Restart fluentd container ------------------------------------- 32.74s 2025-09-02 00:46:01.510351 | orchestrator | common : Restart cron container ----------------------------------------- 9.59s 2025-09-02 00:46:01.510359 | orchestrator | common : Copying over config.json files for services -------------------- 9.11s 2025-09-02 00:46:01.510367 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.63s 2025-09-02 00:46:01.510375 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.52s 2025-09-02 00:46:01.510383 | orchestrator | common : Copying over cron logrotate config file ------------------------ 5.01s 2025-09-02 00:46:01.510391 | orchestrator | common : Check common containers ---------------------------------------- 4.75s 2025-09-02 00:46:01.510399 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.45s 2025-09-02 00:46:01.510407 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.07s 2025-09-02 00:46:01.510415 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.24s 2025-09-02 00:46:01.510423 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.07s 2025-09-02 00:46:01.510431 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.64s 2025-09-02 00:46:01.510439 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.54s 2025-09-02 00:46:01.510447 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.13s 2025-09-02 00:46:01.510455 | orchestrator | common : Find custom fluentd filter config files ------------------------ 2.07s 2025-09-02 00:46:01.510462 | orchestrator | common : Creating log volume -------------------------------------------- 1.93s 2025-09-02 00:46:01.510470 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.84s 2025-09-02 00:46:01.510478 | orchestrator | common : include_tasks -------------------------------------------------- 1.75s 2025-09-02 00:46:01.510494 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.70s 2025-09-02 00:46:01.510502 | orchestrator | 2025-09-02 00:46:01 | INFO  | Task 
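The TASKS RECAP above is a profile_tasks-style summary of the slowest tasks in the play (the container restarts dominate at roughly 42 s and 33 s). A small helper sketch for reading such recap lines, assuming the "<task> ---- <seconds>s" shape shown above:

```python
# Parses recap lines of the form "<task name> ------ 41.88s" and sorts them by
# duration; the sample lines are copied from the recap above.
import re

recap = [
    "common : Restart kolla-toolbox container ------------------------------- 41.88s",
    "common : Restart fluentd container ------------------------------------- 32.74s",
    "common : Restart cron container ----------------------------------------- 9.59s",
]

def parse_recap(lines):
    pattern = re.compile(r"^(?P<task>.+?)\s-+\s(?P<secs>\d+\.\d+)s$")
    out = []
    for line in lines:
        m = pattern.match(line.strip())
        if m:
            out.append((m.group("task"), float(m.group("secs"))))
    return sorted(out, key=lambda item: item[1], reverse=True)

for task, secs in parse_recap(recap):
    print(f"{secs:7.2f}s  {task}")
```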
07881764-27e3-4938-945c-f906ae33f8cf is in state STARTED 2025-09-02 00:46:01.510510 | orchestrator | 2025-09-02 00:46:01 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:46:04.537745 | orchestrator | 2025-09-02 00:46:04 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:46:04.539340 | orchestrator | 2025-09-02 00:46:04 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:46:04.540303 | orchestrator | 2025-09-02 00:46:04 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:46:04.542309 | orchestrator | 2025-09-02 00:46:04 | INFO  | Task 749bc552-c95d-4fd8-a314-3a65d75bc93c is in state STARTED 2025-09-02 00:46:04.543299 | orchestrator | 2025-09-02 00:46:04 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:46:04.544327 | orchestrator | 2025-09-02 00:46:04 | INFO  | Task 07881764-27e3-4938-945c-f906ae33f8cf is in state STARTED 2025-09-02 00:46:04.545014 | orchestrator | 2025-09-02 00:46:04 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:46:07.573691 | orchestrator | 2025-09-02 00:46:07 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:46:07.573956 | orchestrator | 2025-09-02 00:46:07 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:46:07.574655 | orchestrator | 2025-09-02 00:46:07 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:46:07.575377 | orchestrator | 2025-09-02 00:46:07 | INFO  | Task 749bc552-c95d-4fd8-a314-3a65d75bc93c is in state STARTED 2025-09-02 00:46:07.576628 | orchestrator | 2025-09-02 00:46:07 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:46:07.577418 | orchestrator | 2025-09-02 00:46:07 | INFO  | Task 07881764-27e3-4938-945c-f906ae33f8cf is in state STARTED 2025-09-02 00:46:07.577448 | orchestrator | 2025-09-02 00:46:07 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:46:10.609653 | orchestrator | 2025-09-02 00:46:10 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:46:10.610509 | orchestrator | 2025-09-02 00:46:10 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:46:10.611354 | orchestrator | 2025-09-02 00:46:10 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:46:10.612217 | orchestrator | 2025-09-02 00:46:10 | INFO  | Task 749bc552-c95d-4fd8-a314-3a65d75bc93c is in state STARTED 2025-09-02 00:46:10.613219 | orchestrator | 2025-09-02 00:46:10 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:46:10.614200 | orchestrator | 2025-09-02 00:46:10 | INFO  | Task 07881764-27e3-4938-945c-f906ae33f8cf is in state STARTED 2025-09-02 00:46:10.614277 | orchestrator | 2025-09-02 00:46:10 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:46:13.669052 | orchestrator | 2025-09-02 00:46:13 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:46:13.670327 | orchestrator | 2025-09-02 00:46:13 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:46:13.672272 | orchestrator | 2025-09-02 00:46:13 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:46:13.696602 | orchestrator | 2025-09-02 00:46:13 | INFO  | Task 749bc552-c95d-4fd8-a314-3a65d75bc93c is in state STARTED 2025-09-02 00:46:13.696684 | orchestrator | 2025-09-02 
00:46:13 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:46:13.696698 | orchestrator | 2025-09-02 00:46:13 | INFO  | Task 07881764-27e3-4938-945c-f906ae33f8cf is in state STARTED 2025-09-02 00:46:13.696711 | orchestrator | 2025-09-02 00:46:13 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:46:16.740015 | orchestrator | 2025-09-02 00:46:16 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:46:16.740108 | orchestrator | 2025-09-02 00:46:16 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:46:16.740456 | orchestrator | 2025-09-02 00:46:16 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:46:16.741898 | orchestrator | 2025-09-02 00:46:16 | INFO  | Task 749bc552-c95d-4fd8-a314-3a65d75bc93c is in state STARTED 2025-09-02 00:46:16.742751 | orchestrator | 2025-09-02 00:46:16 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:46:16.743738 | orchestrator | 2025-09-02 00:46:16 | INFO  | Task 07881764-27e3-4938-945c-f906ae33f8cf is in state STARTED 2025-09-02 00:46:16.743764 | orchestrator | 2025-09-02 00:46:16 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:46:19.794999 | orchestrator | 2025-09-02 00:46:19 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:46:19.796876 | orchestrator | 2025-09-02 00:46:19 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:46:19.796911 | orchestrator | 2025-09-02 00:46:19 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:46:19.796923 | orchestrator | 2025-09-02 00:46:19 | INFO  | Task 749bc552-c95d-4fd8-a314-3a65d75bc93c is in state STARTED 2025-09-02 00:46:19.797552 | orchestrator | 2025-09-02 00:46:19 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:46:19.799067 | orchestrator | 2025-09-02 00:46:19 | INFO  | Task 07881764-27e3-4938-945c-f906ae33f8cf is in state STARTED 2025-09-02 00:46:19.799092 | orchestrator | 2025-09-02 00:46:19 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:46:22.868161 | orchestrator | 2025-09-02 00:46:22 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:46:22.869514 | orchestrator | 2025-09-02 00:46:22 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:46:22.870755 | orchestrator | 2025-09-02 00:46:22 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:46:22.872804 | orchestrator | 2025-09-02 00:46:22 | INFO  | Task 749bc552-c95d-4fd8-a314-3a65d75bc93c is in state SUCCESS 2025-09-02 00:46:22.873914 | orchestrator | 2025-09-02 00:46:22 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:46:22.874897 | orchestrator | 2025-09-02 00:46:22 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:46:22.876900 | orchestrator | 2025-09-02 00:46:22 | INFO  | Task 07881764-27e3-4938-945c-f906ae33f8cf is in state STARTED 2025-09-02 00:46:22.877066 | orchestrator | 2025-09-02 00:46:22 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:46:25.959987 | orchestrator | 2025-09-02 00:46:25 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:46:25.960064 | orchestrator | 2025-09-02 00:46:25 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:46:25.960077 | 
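The interleaved "Task <uuid> is in state STARTED" lines come from the OSISM orchestrator polling several deployment tasks until each reports SUCCESS, sleeping one second between checks. A generic polling sketch of that loop; get_task_state is a placeholder, not the real OSISM client API:

```python
# Generic polling sketch matching the log pattern above: check a set of task
# IDs, report their state, and wait a fixed interval until all have finished.
import time

def get_task_state(task_id: str) -> str:
    ...  # placeholder: a real implementation would query the manager/task backend

def wait_for_tasks(task_ids, interval=1.0, get_state=get_task_state):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```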
orchestrator | 2025-09-02 00:46:25 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:46:25.960109 | orchestrator | 2025-09-02 00:46:25 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:46:25.960144 | orchestrator | 2025-09-02 00:46:25 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:46:25.960156 | orchestrator | 2025-09-02 00:46:25 | INFO  | Task 07881764-27e3-4938-945c-f906ae33f8cf is in state STARTED 2025-09-02 00:46:25.960167 | orchestrator | 2025-09-02 00:46:25 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:46:28.978374 | orchestrator | 2025-09-02 00:46:28 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:46:28.978457 | orchestrator | 2025-09-02 00:46:28 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:46:28.980646 | orchestrator | 2025-09-02 00:46:28 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:46:28.980669 | orchestrator | 2025-09-02 00:46:28 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:46:28.980681 | orchestrator | 2025-09-02 00:46:28 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:46:28.981405 | orchestrator | 2025-09-02 00:46:28 | INFO  | Task 07881764-27e3-4938-945c-f906ae33f8cf is in state STARTED 2025-09-02 00:46:28.981427 | orchestrator | 2025-09-02 00:46:28 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:46:32.055644 | orchestrator | 2025-09-02 00:46:32 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:46:32.058655 | orchestrator | 2025-09-02 00:46:32 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:46:32.058690 | orchestrator | 2025-09-02 00:46:32 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:46:32.058703 | orchestrator | 2025-09-02 00:46:32 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:46:32.063211 | orchestrator | 2025-09-02 00:46:32 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:46:32.063620 | orchestrator | 2025-09-02 00:46:32 | INFO  | Task 07881764-27e3-4938-945c-f906ae33f8cf is in state STARTED 2025-09-02 00:46:32.063641 | orchestrator | 2025-09-02 00:46:32 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:46:35.142212 | orchestrator | 2025-09-02 00:46:35 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:46:35.142637 | orchestrator | 2025-09-02 00:46:35 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:46:35.144036 | orchestrator | 2025-09-02 00:46:35 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:46:35.145091 | orchestrator | 2025-09-02 00:46:35 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:46:35.146565 | orchestrator | 2025-09-02 00:46:35 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:46:35.147714 | orchestrator | 2025-09-02 00:46:35 | INFO  | Task 07881764-27e3-4938-945c-f906ae33f8cf is in state SUCCESS 2025-09-02 00:46:35.148304 | orchestrator | 2025-09-02 00:46:35.148329 | orchestrator | 2025-09-02 00:46:35.148341 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 00:46:35.148353 | orchestrator 
| 2025-09-02 00:46:35.148365 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 00:46:35.148376 | orchestrator | Tuesday 02 September 2025 00:46:04 +0000 (0:00:00.280) 0:00:00.280 ***** 2025-09-02 00:46:35.148387 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:46:35.148400 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:46:35.148411 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:46:35.148450 | orchestrator | 2025-09-02 00:46:35.148462 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 00:46:35.148473 | orchestrator | Tuesday 02 September 2025 00:46:05 +0000 (0:00:00.435) 0:00:00.715 ***** 2025-09-02 00:46:35.148484 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-09-02 00:46:35.148495 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-09-02 00:46:35.148506 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-09-02 00:46:35.148517 | orchestrator | 2025-09-02 00:46:35.148528 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-09-02 00:46:35.148593 | orchestrator | 2025-09-02 00:46:35.148673 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-09-02 00:46:35.148688 | orchestrator | Tuesday 02 September 2025 00:46:05 +0000 (0:00:00.514) 0:00:01.230 ***** 2025-09-02 00:46:35.148699 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:46:35.148712 | orchestrator | 2025-09-02 00:46:35.148725 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-09-02 00:46:35.148737 | orchestrator | Tuesday 02 September 2025 00:46:06 +0000 (0:00:00.737) 0:00:01.967 ***** 2025-09-02 00:46:35.148749 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-02 00:46:35.148761 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-02 00:46:35.148799 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-02 00:46:35.148812 | orchestrator | 2025-09-02 00:46:35.148823 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-09-02 00:46:35.148833 | orchestrator | Tuesday 02 September 2025 00:46:07 +0000 (0:00:00.795) 0:00:02.763 ***** 2025-09-02 00:46:35.148871 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-02 00:46:35.148882 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-02 00:46:35.148893 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-02 00:46:35.148904 | orchestrator | 2025-09-02 00:46:35.148933 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-09-02 00:46:35.148947 | orchestrator | Tuesday 02 September 2025 00:46:09 +0000 (0:00:02.392) 0:00:05.155 ***** 2025-09-02 00:46:35.148959 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:46:35.148972 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:46:35.148985 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:46:35.148997 | orchestrator | 2025-09-02 00:46:35.149010 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-09-02 00:46:35.149022 | orchestrator | Tuesday 02 September 2025 00:46:11 +0000 (0:00:01.926) 0:00:07.082 ***** 2025-09-02 00:46:35.149034 | orchestrator 
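The "Group hosts based on enabled services" task builds synthetic inventory groups such as enable_memcached_True so the following play targets only the hosts that actually run that service. A rough sketch of that grouping, with illustrative flag values rather than the real host variables:

```python
# Sketch of group_by-style grouping; host flags below are invented examples.
host_vars = {
    "testbed-node-0": {"enable_memcached": True, "enable_redis": True},
    "testbed-node-1": {"enable_memcached": True, "enable_redis": True},
    "testbed-node-2": {"enable_memcached": True, "enable_redis": True},
}

groups: dict[str, list[str]] = {}
for host, flags in host_vars.items():
    for flag, value in flags.items():
        groups.setdefault(f"{flag}_{value}", []).append(host)

print(groups["enable_memcached_True"])  # ['testbed-node-0', 'testbed-node-1', 'testbed-node-2']
```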
| changed: [testbed-node-1] 2025-09-02 00:46:35.149046 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:46:35.149058 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:46:35.149071 | orchestrator | 2025-09-02 00:46:35.149083 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:46:35.149096 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:46:35.149110 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:46:35.149123 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:46:35.149136 | orchestrator | 2025-09-02 00:46:35.149147 | orchestrator | 2025-09-02 00:46:35.149160 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:46:35.149173 | orchestrator | Tuesday 02 September 2025 00:46:20 +0000 (0:00:08.451) 0:00:15.534 ***** 2025-09-02 00:46:35.149187 | orchestrator | =============================================================================== 2025-09-02 00:46:35.149200 | orchestrator | memcached : Restart memcached container --------------------------------- 8.45s 2025-09-02 00:46:35.149223 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.39s 2025-09-02 00:46:35.149235 | orchestrator | memcached : Check memcached container ----------------------------------- 1.93s 2025-09-02 00:46:35.149248 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.80s 2025-09-02 00:46:35.149260 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.74s 2025-09-02 00:46:35.149273 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2025-09-02 00:46:35.149286 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s 2025-09-02 00:46:35.149296 | orchestrator | 2025-09-02 00:46:35.151687 | orchestrator | 2025-09-02 00:46:35.151735 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 00:46:35.151747 | orchestrator | 2025-09-02 00:46:35.151758 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 00:46:35.151770 | orchestrator | Tuesday 02 September 2025 00:46:04 +0000 (0:00:00.350) 0:00:00.350 ***** 2025-09-02 00:46:35.151781 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:46:35.151793 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:46:35.151804 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:46:35.151815 | orchestrator | 2025-09-02 00:46:35.151826 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 00:46:35.151837 | orchestrator | Tuesday 02 September 2025 00:46:04 +0000 (0:00:00.547) 0:00:00.897 ***** 2025-09-02 00:46:35.151906 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-09-02 00:46:35.151918 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-09-02 00:46:35.151929 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-09-02 00:46:35.151940 | orchestrator | 2025-09-02 00:46:35.151950 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-09-02 00:46:35.151961 | orchestrator | 2025-09-02 00:46:35.151972 | 
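The memcached play above follows the usual Kolla role pattern: create the config directory, render config.json, check the container, and restart it through a handler. A minimal, hypothetical Ansible sketch of that flow (not the actual kolla-ansible role code; the paths, template name, and restart mechanism are assumptions) could look like this:

- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "/etc/kolla/{{ item }}"        # assumed target directory
    state: directory
    mode: "0770"
  loop:
    - memcached

- name: Copying over config.json files for services
  ansible.builtin.template:
    src: memcached.json.j2               # hypothetical template name
    dest: /etc/kolla/memcached/config.json
    mode: "0660"
  notify:
    - Restart memcached container

# handler (sketch) - the real role drives the container through kolla's own container module
- name: Restart memcached container
  ansible.builtin.command: docker restart memcached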
orchestrator | TASK [redis : include_tasks] *************************************************** 2025-09-02 00:46:35.151983 | orchestrator | Tuesday 02 September 2025 00:46:05 +0000 (0:00:00.698) 0:00:01.595 ***** 2025-09-02 00:46:35.151994 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:46:35.152006 | orchestrator | 2025-09-02 00:46:35.152017 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-09-02 00:46:35.152027 | orchestrator | Tuesday 02 September 2025 00:46:06 +0000 (0:00:00.646) 0:00:02.242 ***** 2025-09-02 00:46:35.152042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152178 | orchestrator | 2025-09-02 00:46:35.152194 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-09-02 00:46:35.152206 | orchestrator | Tuesday 02 September 2025 00:46:07 +0000 (0:00:01.451) 0:00:03.694 ***** 2025-09-02 00:46:35.152217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152316 | orchestrator | 2025-09-02 00:46:35.152328 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-02 00:46:35.152341 | orchestrator | Tuesday 02 September 2025 00:46:10 +0000 (0:00:02.608) 0:00:06.303 ***** 2025-09-02 00:46:35.152354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152454 | orchestrator | 2025-09-02 00:46:35.152465 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-09-02 00:46:35.152477 | orchestrator | Tuesday 02 September 2025 00:46:13 +0000 (0:00:03.405) 0:00:09.708 ***** 2025-09-02 00:46:35.152488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-02 00:46:35.152576 | orchestrator | 2025-09-02 00:46:35.152587 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-02 00:46:35.152598 | orchestrator | Tuesday 02 September 2025 00:46:16 +0000 (0:00:02.367) 0:00:12.076 ***** 2025-09-02 00:46:35.152609 | orchestrator | 2025-09-02 00:46:35.152620 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-02 00:46:35.152631 | orchestrator | 
Tuesday 02 September 2025 00:46:16 +0000 (0:00:00.087) 0:00:12.164 ***** 2025-09-02 00:46:35.152642 | orchestrator | 2025-09-02 00:46:35.152653 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-02 00:46:35.152664 | orchestrator | Tuesday 02 September 2025 00:46:16 +0000 (0:00:00.138) 0:00:12.302 ***** 2025-09-02 00:46:35.152674 | orchestrator | 2025-09-02 00:46:35.152685 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-09-02 00:46:35.152696 | orchestrator | Tuesday 02 September 2025 00:46:16 +0000 (0:00:00.082) 0:00:12.385 ***** 2025-09-02 00:46:35.152707 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:46:35.152719 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:46:35.152730 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:46:35.152741 | orchestrator | 2025-09-02 00:46:35.152752 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-09-02 00:46:35.152763 | orchestrator | Tuesday 02 September 2025 00:46:20 +0000 (0:00:04.454) 0:00:16.839 ***** 2025-09-02 00:46:35.152774 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:46:35.152785 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:46:35.152803 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:46:35.152814 | orchestrator | 2025-09-02 00:46:35.152825 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:46:35.152837 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:46:35.152867 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:46:35.152878 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:46:35.152889 | orchestrator | 2025-09-02 00:46:35.152900 | orchestrator | 2025-09-02 00:46:35.152916 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:46:35.152927 | orchestrator | Tuesday 02 September 2025 00:46:31 +0000 (0:00:10.762) 0:00:27.602 ***** 2025-09-02 00:46:35.152938 | orchestrator | =============================================================================== 2025-09-02 00:46:35.152949 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.77s 2025-09-02 00:46:35.152960 | orchestrator | redis : Restart redis container ----------------------------------------- 4.45s 2025-09-02 00:46:35.152971 | orchestrator | redis : Copying over redis config files --------------------------------- 3.41s 2025-09-02 00:46:35.152982 | orchestrator | redis : Copying over default config.json files -------------------------- 2.61s 2025-09-02 00:46:35.152992 | orchestrator | redis : Check redis containers ------------------------------------------ 2.37s 2025-09-02 00:46:35.153003 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.45s 2025-09-02 00:46:35.153014 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s 2025-09-02 00:46:35.153025 | orchestrator | redis : include_tasks --------------------------------------------------- 0.65s 2025-09-02 00:46:35.153035 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.55s 2025-09-02 00:46:35.153046 | orchestrator | redis : Flush handlers 
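For readability, the service definitions that the redis tasks iterate over (dumped inline above as Python dicts) correspond roughly to the following YAML; every value is taken directly from the log output:

redis:
  container_name: redis
  group: redis
  enabled: true
  image: registry.osism.tech/kolla/redis:2024.2
  volumes:
    - "/etc/kolla/redis/:/var/lib/kolla/config_files/:ro"
    - "/etc/localtime:/etc/localtime:ro"
    - "/etc/timezone:/etc/timezone:ro"
    - "redis:/var/lib/redis/"
    - "kolla_logs:/var/log/kolla/"
  dimensions: {}
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_listen redis-server 6379"]
    timeout: "30"

redis-sentinel:
  container_name: redis_sentinel
  group: redis
  enabled: true
  image: registry.osism.tech/kolla/redis-sentinel:2024.2
  environment:
    REDIS_CONF: /etc/redis/redis.conf
    REDIS_GEN_CONF: /etc/redis/redis-regenerated-by-config-rewrite.conf
  volumes:
    - "/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro"
    - "/etc/localtime:/etc/localtime:ro"
    - "/etc/timezone:/etc/timezone:ro"
    - "kolla_logs:/var/log/kolla/"
  dimensions: {}
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_listen redis-sentinel 26379"]
    timeout: "30"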
-------------------------------------------------- 0.31s 2025-09-02 00:46:35.153057 | orchestrator | 2025-09-02 00:46:35 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:46:38.181185 | orchestrator | 2025-09-02 00:46:38 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:46:38.181631 | orchestrator | 2025-09-02 00:46:38 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:46:38.182542 | orchestrator | 2025-09-02 00:46:38 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:46:38.183372 | orchestrator | 2025-09-02 00:46:38 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:46:38.184169 | orchestrator | 2025-09-02 00:46:38 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:46:38.185485 | orchestrator | 2025-09-02 00:46:38 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:46:41.228976 | orchestrator | 2025-09-02 00:46:41 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:46:41.231732 | orchestrator | 2025-09-02 00:46:41 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:46:41.232936 | orchestrator | 2025-09-02 00:46:41 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:46:41.234296 | orchestrator | 2025-09-02 00:46:41 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:46:41.235684 | orchestrator | 2025-09-02 00:46:41 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:46:41.235952 | orchestrator | 2025-09-02 00:46:41 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:46:44.276121 | orchestrator | 2025-09-02 00:46:44 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:46:44.276362 | orchestrator | 2025-09-02 00:46:44 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:46:44.277008 | orchestrator | 2025-09-02 00:46:44 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:46:44.277469 | orchestrator | 2025-09-02 00:46:44 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:46:44.278239 | orchestrator | 2025-09-02 00:46:44 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:46:44.278377 | orchestrator | 2025-09-02 00:46:44 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:46:47.435702 | orchestrator | 2025-09-02 00:46:47 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:46:47.435777 | orchestrator | 2025-09-02 00:46:47 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:46:47.435785 | orchestrator | 2025-09-02 00:46:47 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:46:47.435791 | orchestrator | 2025-09-02 00:46:47 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:46:47.435798 | orchestrator | 2025-09-02 00:46:47 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:46:47.435804 | orchestrator | 2025-09-02 00:46:47 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:46:50.395404 | orchestrator | 2025-09-02 00:46:50 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:46:50.395518 | orchestrator | 2025-09-02 00:46:50 | INFO  | Task 
b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:46:50.396080 | orchestrator | 2025-09-02 00:46:50 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:46:50.397716 | orchestrator | 2025-09-02 00:46:50 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:46:50.398278 | orchestrator | 2025-09-02 00:46:50 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:46:50.399353 | orchestrator | 2025-09-02 00:46:50 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:46:53.478993 | orchestrator | 2025-09-02 00:46:53 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:46:53.479090 | orchestrator | 2025-09-02 00:46:53 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:46:53.484746 | orchestrator | 2025-09-02 00:46:53 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:46:53.484774 | orchestrator | 2025-09-02 00:46:53 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:46:53.484786 | orchestrator | 2025-09-02 00:46:53 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:46:53.484798 | orchestrator | 2025-09-02 00:46:53 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:46:56.726329 | orchestrator | 2025-09-02 00:46:56 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:46:56.726419 | orchestrator | 2025-09-02 00:46:56 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:46:56.726432 | orchestrator | 2025-09-02 00:46:56 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:46:56.726444 | orchestrator | 2025-09-02 00:46:56 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:46:56.726504 | orchestrator | 2025-09-02 00:46:56 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:46:56.726517 | orchestrator | 2025-09-02 00:46:56 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:00.149293 | orchestrator | 2025-09-02 00:47:00 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:00.149397 | orchestrator | 2025-09-02 00:47:00 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:00.149412 | orchestrator | 2025-09-02 00:47:00 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:47:00.149424 | orchestrator | 2025-09-02 00:47:00 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:47:00.149435 | orchestrator | 2025-09-02 00:47:00 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:00.149447 | orchestrator | 2025-09-02 00:47:00 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:03.360507 | orchestrator | 2025-09-02 00:47:03 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:03.365430 | orchestrator | 2025-09-02 00:47:03 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:03.367581 | orchestrator | 2025-09-02 00:47:03 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:47:03.368729 | orchestrator | 2025-09-02 00:47:03 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:47:03.372534 | orchestrator | 2025-09-02 
00:47:03 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:03.372559 | orchestrator | 2025-09-02 00:47:03 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:06.453997 | orchestrator | 2025-09-02 00:47:06 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:06.454122 | orchestrator | 2025-09-02 00:47:06 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:06.455530 | orchestrator | 2025-09-02 00:47:06 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:47:06.461780 | orchestrator | 2025-09-02 00:47:06 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:47:06.461838 | orchestrator | 2025-09-02 00:47:06 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:06.461850 | orchestrator | 2025-09-02 00:47:06 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:09.521544 | orchestrator | 2025-09-02 00:47:09 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:09.521667 | orchestrator | 2025-09-02 00:47:09 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:09.521680 | orchestrator | 2025-09-02 00:47:09 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:47:09.521754 | orchestrator | 2025-09-02 00:47:09 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state STARTED 2025-09-02 00:47:09.522757 | orchestrator | 2025-09-02 00:47:09 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:09.522780 | orchestrator | 2025-09-02 00:47:09 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:12.746285 | orchestrator | 2025-09-02 00:47:12 | INFO  | Task e3911985-43f1-4b75-9afc-bcdd437e5817 is in state STARTED 2025-09-02 00:47:12.749596 | orchestrator | 2025-09-02 00:47:12 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:12.751469 | orchestrator | 2025-09-02 00:47:12 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:12.753045 | orchestrator | 2025-09-02 00:47:12 | INFO  | Task 98460295-bb67-4953-8ada-e712987b048f is in state STARTED 2025-09-02 00:47:12.754891 | orchestrator | 2025-09-02 00:47:12 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:47:12.758287 | orchestrator | 2025-09-02 00:47:12 | INFO  | Task 6a2c10cf-e95d-4ef3-954c-51a1d8834d8c is in state SUCCESS 2025-09-02 00:47:12.760695 | orchestrator | 2025-09-02 00:47:12.760739 | orchestrator | 2025-09-02 00:47:12.760748 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-02 00:47:12.760757 | orchestrator | 2025-09-02 00:47:12.760766 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-02 00:47:12.760775 | orchestrator | Tuesday 02 September 2025 00:43:24 +0000 (0:00:00.239) 0:00:00.239 ***** 2025-09-02 00:47:12.760782 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:47:12.760792 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:47:12.760799 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:47:12.760806 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.760814 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:12.760821 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.760828 | orchestrator | 2025-09-02 
00:47:12.760836 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-02 00:47:12.760844 | orchestrator | Tuesday 02 September 2025 00:43:25 +0000 (0:00:00.883) 0:00:01.123 ***** 2025-09-02 00:47:12.760851 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:47:12.760860 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:47:12.760867 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:47:12.760875 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.760882 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.760889 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.760896 | orchestrator | 2025-09-02 00:47:12.760904 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-02 00:47:12.760932 | orchestrator | Tuesday 02 September 2025 00:43:26 +0000 (0:00:00.951) 0:00:02.075 ***** 2025-09-02 00:47:12.760941 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:47:12.760948 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:47:12.760955 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:47:12.760962 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.760969 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.760977 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.760984 | orchestrator | 2025-09-02 00:47:12.760991 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-02 00:47:12.760998 | orchestrator | Tuesday 02 September 2025 00:43:27 +0000 (0:00:00.986) 0:00:03.062 ***** 2025-09-02 00:47:12.761006 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:47:12.761013 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:47:12.761020 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:12.761027 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:47:12.761035 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:47:12.761042 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:47:12.761049 | orchestrator | 2025-09-02 00:47:12.761056 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-02 00:47:12.761064 | orchestrator | Tuesday 02 September 2025 00:43:30 +0000 (0:00:02.616) 0:00:05.678 ***** 2025-09-02 00:47:12.761071 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:47:12.761078 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:47:12.761086 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:47:12.761093 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:12.761100 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:47:12.761107 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:47:12.761114 | orchestrator | 2025-09-02 00:47:12.761122 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-02 00:47:12.761142 | orchestrator | Tuesday 02 September 2025 00:43:31 +0000 (0:00:01.032) 0:00:06.710 ***** 2025-09-02 00:47:12.761150 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:47:12.761157 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:47:12.761165 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:47:12.761172 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:12.761179 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:47:12.761186 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:47:12.761194 | orchestrator | 2025-09-02 
00:47:12.761201 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-02 00:47:12.761208 | orchestrator | Tuesday 02 September 2025 00:43:32 +0000 (0:00:01.269) 0:00:07.980 ***** 2025-09-02 00:47:12.761216 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:47:12.761223 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:47:12.761230 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:47:12.761237 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.761246 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.761254 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.761263 | orchestrator | 2025-09-02 00:47:12.761277 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-02 00:47:12.761286 | orchestrator | Tuesday 02 September 2025 00:43:33 +0000 (0:00:00.804) 0:00:08.784 ***** 2025-09-02 00:47:12.761294 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:47:12.761302 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:47:12.761311 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:47:12.761319 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.761328 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.761336 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.761344 | orchestrator | 2025-09-02 00:47:12.761353 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-02 00:47:12.761361 | orchestrator | Tuesday 02 September 2025 00:43:34 +0000 (0:00:00.905) 0:00:09.689 ***** 2025-09-02 00:47:12.761370 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-02 00:47:12.761378 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-02 00:47:12.761387 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:47:12.761395 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-02 00:47:12.761403 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-02 00:47:12.761412 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:47:12.761420 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-02 00:47:12.761429 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-02 00:47:12.761437 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:47:12.761445 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-02 00:47:12.761462 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-02 00:47:12.761471 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.761479 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-02 00:47:12.761488 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-02 00:47:12.761496 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.761505 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-02 00:47:12.761513 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-02 00:47:12.761521 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.761529 | orchestrator | 2025-09-02 
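The k3s_prereq tasks above flip kernel parameters on every node before k3s is installed. A minimal sketch of the forwarding settings using the ansible.posix.sysctl module — the exact keys and values the role sets are assumptions based on the task names — might be:

- name: Enable IPv4 forwarding
  ansible.posix.sysctl:
    name: net.ipv4.ip_forward            # assumed sysctl key
    value: "1"
    state: present
    reload: true

- name: Enable IPv6 forwarding
  ansible.posix.sysctl:
    name: net.ipv6.conf.all.forwarding
    value: "1"
    state: present
    reload: true

- name: Enable IPv6 router advertisements
  ansible.posix.sysctl:
    name: net.ipv6.conf.all.accept_ra
    value: "2"                           # 2 = keep accepting RAs while forwarding
    state: present
    reload: true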
00:47:12.761537 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-02 00:47:12.761545 | orchestrator | Tuesday 02 September 2025 00:43:35 +0000 (0:00:00.950) 0:00:10.640 ***** 2025-09-02 00:47:12.761554 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:47:12.761567 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:47:12.761576 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:47:12.761584 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.761593 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.761601 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.761608 | orchestrator | 2025-09-02 00:47:12.761615 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-02 00:47:12.761624 | orchestrator | Tuesday 02 September 2025 00:43:36 +0000 (0:00:01.593) 0:00:12.234 ***** 2025-09-02 00:47:12.761632 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:47:12.761639 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:47:12.761646 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:47:12.761654 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.761661 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:12.761668 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.761675 | orchestrator | 2025-09-02 00:47:12.761683 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-09-02 00:47:12.761690 | orchestrator | Tuesday 02 September 2025 00:43:37 +0000 (0:00:00.924) 0:00:13.159 ***** 2025-09-02 00:47:12.761697 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:47:12.761705 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:47:12.761712 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:12.761719 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:47:12.761726 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:47:12.761734 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:47:12.761741 | orchestrator | 2025-09-02 00:47:12.761748 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-09-02 00:47:12.761755 | orchestrator | Tuesday 02 September 2025 00:43:43 +0000 (0:00:05.766) 0:00:18.926 ***** 2025-09-02 00:47:12.761763 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:47:12.761770 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:47:12.761777 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:47:12.761784 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.761792 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.761799 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.761806 | orchestrator | 2025-09-02 00:47:12.761813 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-09-02 00:47:12.761821 | orchestrator | Tuesday 02 September 2025 00:43:45 +0000 (0:00:01.710) 0:00:20.636 ***** 2025-09-02 00:47:12.761828 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:47:12.761835 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:47:12.761843 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:47:12.761850 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.761857 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.761864 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.761871 | orchestrator | 2025-09-02 
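The download step above fetches only the x64 binary (the arm64 and armhf variants are skipped on these nodes) and took roughly 5.8 s according to the profile timing. A hypothetical equivalent with ansible.builtin.get_url — the release URL and version variable are illustrative assumptions, not the role's actual values — would be:

- name: Download k3s binary x64
  ansible.builtin.get_url:
    url: "https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/k3s"  # assumed source
    dest: /usr/local/bin/k3s
    owner: root
    group: root
    mode: "0755"
  when: ansible_architecture == "x86_64"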
00:47:12.761879 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-09-02 00:47:12.761888 | orchestrator | Tuesday 02 September 2025 00:43:48 +0000 (0:00:02.889) 0:00:23.525 ***** 2025-09-02 00:47:12.761896 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:47:12.761903 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:47:12.761932 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:47:12.761940 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.761948 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:12.761958 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.761966 | orchestrator | 2025-09-02 00:47:12.761973 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-09-02 00:47:12.761981 | orchestrator | Tuesday 02 September 2025 00:43:49 +0000 (0:00:01.688) 0:00:25.214 ***** 2025-09-02 00:47:12.761988 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-09-02 00:47:12.761996 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-09-02 00:47:12.762003 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-09-02 00:47:12.762057 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-09-02 00:47:12.762067 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-09-02 00:47:12.762075 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-09-02 00:47:12.762083 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-09-02 00:47:12.762090 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-09-02 00:47:12.762097 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-09-02 00:47:12.762104 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-09-02 00:47:12.762112 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-09-02 00:47:12.762120 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-09-02 00:47:12.762127 | orchestrator | 2025-09-02 00:47:12.762135 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-09-02 00:47:12.762142 | orchestrator | Tuesday 02 September 2025 00:43:52 +0000 (0:00:02.784) 0:00:27.998 ***** 2025-09-02 00:47:12.762150 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:47:12.762157 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:47:12.762164 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:47:12.762172 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:12.762179 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:47:12.762187 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:47:12.762194 | orchestrator | 2025-09-02 00:47:12.762206 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-09-02 00:47:12.762214 | orchestrator | 2025-09-02 00:47:12.762222 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-09-02 00:47:12.762229 | orchestrator | Tuesday 02 September 2025 00:43:54 +0000 (0:00:02.316) 0:00:30.315 ***** 2025-09-02 00:47:12.762236 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.762244 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:12.762251 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.762258 | orchestrator | 2025-09-02 00:47:12.762266 | orchestrator | TASK [k3s_server : Stop k3s-init] 
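The k3s_custom_registries tasks create /etc/rancher/k3s and insert registry mirrors into /etc/rancher/k3s/registries.yaml so that k3s pulls images through a local mirror. The file follows the standard k3s registries.yaml schema; the mirror hosts and endpoint below are purely illustrative assumptions:

# /etc/rancher/k3s/registries.yaml (sketch)
mirrors:
  docker.io:
    endpoint:
      - "https://registry.osism.tech"    # assumed mirror endpoint
  quay.io:
    endpoint:
      - "https://registry.osism.tech"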
********************************************** 2025-09-02 00:47:12.762273 | orchestrator | Tuesday 02 September 2025 00:43:56 +0000 (0:00:01.392) 0:00:31.708 ***** 2025-09-02 00:47:12.762280 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.762288 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:12.762295 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.762302 | orchestrator | 2025-09-02 00:47:12.762309 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-09-02 00:47:12.762317 | orchestrator | Tuesday 02 September 2025 00:43:57 +0000 (0:00:01.532) 0:00:33.241 ***** 2025-09-02 00:47:12.762324 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.762331 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:12.762338 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.762346 | orchestrator | 2025-09-02 00:47:12.762353 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-09-02 00:47:12.762360 | orchestrator | Tuesday 02 September 2025 00:43:58 +0000 (0:00:01.073) 0:00:34.314 ***** 2025-09-02 00:47:12.762368 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.762375 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.762382 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:12.762390 | orchestrator | 2025-09-02 00:47:12.762397 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-09-02 00:47:12.762404 | orchestrator | Tuesday 02 September 2025 00:44:00 +0000 (0:00:01.631) 0:00:35.946 ***** 2025-09-02 00:47:12.762412 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.762419 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.762426 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.762433 | orchestrator | 2025-09-02 00:47:12.762441 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-09-02 00:47:12.762448 | orchestrator | Tuesday 02 September 2025 00:44:01 +0000 (0:00:00.654) 0:00:36.600 ***** 2025-09-02 00:47:12.762455 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.762469 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:12.762476 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.762483 | orchestrator | 2025-09-02 00:47:12.762491 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-09-02 00:47:12.762498 | orchestrator | Tuesday 02 September 2025 00:44:02 +0000 (0:00:01.305) 0:00:37.906 ***** 2025-09-02 00:47:12.762506 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:47:12.762513 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:47:12.762520 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:12.762528 | orchestrator | 2025-09-02 00:47:12.762535 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-09-02 00:47:12.762542 | orchestrator | Tuesday 02 September 2025 00:44:03 +0000 (0:00:01.444) 0:00:39.350 ***** 2025-09-02 00:47:12.762550 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:47:12.762557 | orchestrator | 2025-09-02 00:47:12.762565 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-09-02 00:47:12.762572 | orchestrator | Tuesday 02 September 2025 00:44:04 +0000 (0:00:01.033) 0:00:40.384 ***** 2025-09-02 00:47:12.762579 | 
orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.762587 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.762594 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:12.762601 | orchestrator | 2025-09-02 00:47:12.762608 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-09-02 00:47:12.762616 | orchestrator | Tuesday 02 September 2025 00:44:07 +0000 (0:00:02.922) 0:00:43.306 ***** 2025-09-02 00:47:12.762623 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.762631 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.762638 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:12.762645 | orchestrator | 2025-09-02 00:47:12.762653 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-09-02 00:47:12.762660 | orchestrator | Tuesday 02 September 2025 00:44:08 +0000 (0:00:00.718) 0:00:44.025 ***** 2025-09-02 00:47:12.762668 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.762675 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.762683 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:12.762690 | orchestrator | 2025-09-02 00:47:12.762698 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-09-02 00:47:12.762712 | orchestrator | Tuesday 02 September 2025 00:44:09 +0000 (0:00:00.952) 0:00:44.977 ***** 2025-09-02 00:47:12.762720 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.762727 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.762735 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:12.762742 | orchestrator | 2025-09-02 00:47:12.762749 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-09-02 00:47:12.762756 | orchestrator | Tuesday 02 September 2025 00:44:11 +0000 (0:00:02.309) 0:00:47.286 ***** 2025-09-02 00:47:12.762764 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.762771 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.762778 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.762785 | orchestrator | 2025-09-02 00:47:12.762793 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-09-02 00:47:12.762800 | orchestrator | Tuesday 02 September 2025 00:44:12 +0000 (0:00:00.365) 0:00:47.652 ***** 2025-09-02 00:47:12.762807 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.762815 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.762822 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.762829 | orchestrator | 2025-09-02 00:47:12.762837 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-09-02 00:47:12.762844 | orchestrator | Tuesday 02 September 2025 00:44:12 +0000 (0:00:00.452) 0:00:48.104 ***** 2025-09-02 00:47:12.762851 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:12.762859 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:47:12.762866 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:47:12.762878 | orchestrator | 2025-09-02 00:47:12.762889 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-09-02 00:47:12.762897 | orchestrator | Tuesday 02 September 2025 00:44:14 +0000 (0:00:01.890) 0:00:49.995 ***** 2025-09-02 00:47:12.762905 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that 
all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-02 00:47:12.762928 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-02 00:47:12.762935 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-02 00:47:12.762943 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-02 00:47:12.762950 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-02 00:47:12.762958 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-02 00:47:12.762965 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-02 00:47:12.762973 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-02 00:47:12.762980 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-02 00:47:12.762987 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-02 00:47:12.762994 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-02 00:47:12.763002 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
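The join verification above is a plain Ansible retry loop: the cluster is bootstrapped inside the transient k3s-init unit, and the task re-polls until all three control-plane nodes have registered, giving up after 20 attempts (here it needed a few retries and about 45 s). A simplified, hypothetical version of such a check — the role's actual command and success condition may differ — looks like:

- name: Verify that all nodes actually joined
  ansible.builtin.command:
    cmd: k3s kubectl get nodes -o name
  register: joined_nodes
  until: joined_nodes.stdout_lines | length == groups['k3s_server'] | length   # hypothetical group name
  retries: 20
  delay: 10
  changed_when: false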
2025-09-02 00:47:12.763009 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.763016 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.763024 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:12.763031 | orchestrator | 2025-09-02 00:47:12.763038 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-09-02 00:47:12.763046 | orchestrator | Tuesday 02 September 2025 00:44:59 +0000 (0:00:44.879) 0:01:34.875 ***** 2025-09-02 00:47:12.763053 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.763060 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.763067 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.763075 | orchestrator | 2025-09-02 00:47:12.763082 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-09-02 00:47:12.763089 | orchestrator | Tuesday 02 September 2025 00:44:59 +0000 (0:00:00.303) 0:01:35.178 ***** 2025-09-02 00:47:12.763096 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:12.763104 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:47:12.763111 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:47:12.763118 | orchestrator | 2025-09-02 00:47:12.763126 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-09-02 00:47:12.763133 | orchestrator | Tuesday 02 September 2025 00:45:00 +0000 (0:00:01.184) 0:01:36.363 ***** 2025-09-02 00:47:12.763140 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:47:12.763151 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:47:12.763158 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:12.763165 | orchestrator | 2025-09-02 00:47:12.763173 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-09-02 00:47:12.763180 | orchestrator | Tuesday 02 September 2025 00:45:02 +0000 (0:00:01.295) 0:01:37.658 ***** 2025-09-02 00:47:12.763191 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:12.763198 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:47:12.763206 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:47:12.763213 | orchestrator | 2025-09-02 00:47:12.763220 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-09-02 00:47:12.763228 | orchestrator | Tuesday 02 September 2025 00:45:27 +0000 (0:00:25.603) 0:02:03.261 ***** 2025-09-02 00:47:12.763235 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:12.763242 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.763249 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.763256 | orchestrator | 2025-09-02 00:47:12.763264 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-09-02 00:47:12.763271 | orchestrator | Tuesday 02 September 2025 00:45:28 +0000 (0:00:00.849) 0:02:04.110 ***** 2025-09-02 00:47:12.763278 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.763286 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:12.763293 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.763300 | orchestrator | 2025-09-02 00:47:12.763307 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-09-02 00:47:12.763315 | orchestrator | Tuesday 02 September 2025 00:45:29 +0000 (0:00:00.806) 0:02:04.917 ***** 2025-09-02 00:47:12.763322 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:12.763329 | orchestrator | changed: 
[testbed-node-1] 2025-09-02 00:47:12.763337 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:47:12.763344 | orchestrator | 2025-09-02 00:47:12.763351 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-09-02 00:47:12.763359 | orchestrator | Tuesday 02 September 2025 00:45:30 +0000 (0:00:00.898) 0:02:05.815 ***** 2025-09-02 00:47:12.763366 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.763377 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.763385 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:12.763392 | orchestrator | 2025-09-02 00:47:12.763399 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-09-02 00:47:12.763407 | orchestrator | Tuesday 02 September 2025 00:45:31 +0000 (0:00:01.060) 0:02:06.876 ***** 2025-09-02 00:47:12.763414 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.763421 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:12.763428 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.763436 | orchestrator | 2025-09-02 00:47:12.763443 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-09-02 00:47:12.763450 | orchestrator | Tuesday 02 September 2025 00:45:31 +0000 (0:00:00.381) 0:02:07.257 ***** 2025-09-02 00:47:12.763458 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:12.763465 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:47:12.763472 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:47:12.763479 | orchestrator | 2025-09-02 00:47:12.763487 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-09-02 00:47:12.763494 | orchestrator | Tuesday 02 September 2025 00:45:32 +0000 (0:00:00.647) 0:02:07.905 ***** 2025-09-02 00:47:12.763501 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:12.763509 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:47:12.763516 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:47:12.763523 | orchestrator | 2025-09-02 00:47:12.763530 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-09-02 00:47:12.763538 | orchestrator | Tuesday 02 September 2025 00:45:33 +0000 (0:00:00.647) 0:02:08.552 ***** 2025-09-02 00:47:12.763545 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:12.763552 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:47:12.763559 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:47:12.763567 | orchestrator | 2025-09-02 00:47:12.763574 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-09-02 00:47:12.763581 | orchestrator | Tuesday 02 September 2025 00:45:34 +0000 (0:00:01.125) 0:02:09.678 ***** 2025-09-02 00:47:12.763589 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:12.763600 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:47:12.763608 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:47:12.763615 | orchestrator | 2025-09-02 00:47:12.763622 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-09-02 00:47:12.763629 | orchestrator | Tuesday 02 September 2025 00:45:35 +0000 (0:00:00.837) 0:02:10.515 ***** 2025-09-02 00:47:12.763637 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.763644 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.763651 | orchestrator | skipping: [testbed-node-2] 2025-09-02 
00:47:12.763658 | orchestrator | 2025-09-02 00:47:12.763666 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-09-02 00:47:12.763673 | orchestrator | Tuesday 02 September 2025 00:45:35 +0000 (0:00:00.330) 0:02:10.846 ***** 2025-09-02 00:47:12.763680 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.763687 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.763695 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.763702 | orchestrator | 2025-09-02 00:47:12.763709 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-09-02 00:47:12.763716 | orchestrator | Tuesday 02 September 2025 00:45:35 +0000 (0:00:00.279) 0:02:11.126 ***** 2025-09-02 00:47:12.763724 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.763731 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:12.763738 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.763749 | orchestrator | 2025-09-02 00:47:12.763764 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-09-02 00:47:12.763776 | orchestrator | Tuesday 02 September 2025 00:45:36 +0000 (0:00:00.861) 0:02:11.987 ***** 2025-09-02 00:47:12.763787 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.763799 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:12.763811 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.763825 | orchestrator | 2025-09-02 00:47:12.763840 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-09-02 00:47:12.763852 | orchestrator | Tuesday 02 September 2025 00:45:37 +0000 (0:00:00.630) 0:02:12.618 ***** 2025-09-02 00:47:12.763868 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-02 00:47:12.763879 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-02 00:47:12.763892 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-02 00:47:12.763904 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-02 00:47:12.763968 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-02 00:47:12.763976 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-02 00:47:12.763984 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-02 00:47:12.763991 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-02 00:47:12.763998 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-02 00:47:12.764006 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-09-02 00:47:12.764013 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-02 00:47:12.764021 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-02 00:47:12.764028 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-09-02 00:47:12.764040 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-02 00:47:12.764048 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-02 00:47:12.764063 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-02 00:47:12.764071 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-02 00:47:12.764078 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-02 00:47:12.764085 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-02 00:47:12.764093 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-02 00:47:12.764100 | orchestrator | 2025-09-02 00:47:12.764107 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-09-02 00:47:12.764115 | orchestrator | 2025-09-02 00:47:12.764122 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-09-02 00:47:12.764129 | orchestrator | Tuesday 02 September 2025 00:45:40 +0000 (0:00:03.321) 0:02:15.939 ***** 2025-09-02 00:47:12.764137 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:47:12.764144 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:47:12.764152 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:47:12.764159 | orchestrator | 2025-09-02 00:47:12.764166 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-09-02 00:47:12.764173 | orchestrator | Tuesday 02 September 2025 00:45:40 +0000 (0:00:00.477) 0:02:16.417 ***** 2025-09-02 00:47:12.764181 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:47:12.764188 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:47:12.764196 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:47:12.764203 | orchestrator | 2025-09-02 00:47:12.764210 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-09-02 00:47:12.764218 | orchestrator | Tuesday 02 September 2025 00:45:41 +0000 (0:00:00.758) 0:02:17.176 ***** 2025-09-02 00:47:12.764225 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:47:12.764232 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:47:12.764239 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:47:12.764247 | orchestrator | 2025-09-02 00:47:12.764254 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-09-02 00:47:12.764261 | orchestrator | Tuesday 02 September 2025 00:45:42 +0000 (0:00:00.330) 0:02:17.506 ***** 2025-09-02 00:47:12.764269 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:47:12.764277 | orchestrator | 2025-09-02 00:47:12.764284 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-09-02 00:47:12.764291 | orchestrator | Tuesday 02 September 2025 00:45:42 +0000 (0:00:00.653) 0:02:18.160 ***** 2025-09-02 00:47:12.764298 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:47:12.764306 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:47:12.764313 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:47:12.764320 | orchestrator | 2025-09-02 00:47:12.764328 | orchestrator | TASK [k3s_agent : Copy K3s 
http_proxy conf file] ******************************* 2025-09-02 00:47:12.764335 | orchestrator | Tuesday 02 September 2025 00:45:43 +0000 (0:00:00.297) 0:02:18.457 ***** 2025-09-02 00:47:12.764343 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:47:12.764350 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:47:12.764357 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:47:12.764364 | orchestrator | 2025-09-02 00:47:12.764372 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-09-02 00:47:12.764379 | orchestrator | Tuesday 02 September 2025 00:45:43 +0000 (0:00:00.317) 0:02:18.774 ***** 2025-09-02 00:47:12.764386 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:47:12.764394 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:47:12.764401 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:47:12.764408 | orchestrator | 2025-09-02 00:47:12.764416 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-09-02 00:47:12.764423 | orchestrator | Tuesday 02 September 2025 00:45:43 +0000 (0:00:00.314) 0:02:19.089 ***** 2025-09-02 00:47:12.764435 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:47:12.764450 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:47:12.764457 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:47:12.764464 | orchestrator | 2025-09-02 00:47:12.764472 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-09-02 00:47:12.764479 | orchestrator | Tuesday 02 September 2025 00:45:44 +0000 (0:00:00.852) 0:02:19.942 ***** 2025-09-02 00:47:12.764487 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:47:12.764494 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:47:12.764501 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:47:12.764509 | orchestrator | 2025-09-02 00:47:12.764524 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-09-02 00:47:12.764536 | orchestrator | Tuesday 02 September 2025 00:45:45 +0000 (0:00:01.211) 0:02:21.153 ***** 2025-09-02 00:47:12.764548 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:47:12.764560 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:47:12.764572 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:47:12.764583 | orchestrator | 2025-09-02 00:47:12.764594 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-09-02 00:47:12.764605 | orchestrator | Tuesday 02 September 2025 00:45:46 +0000 (0:00:01.291) 0:02:22.444 ***** 2025-09-02 00:47:12.764616 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:47:12.764627 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:47:12.764638 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:47:12.764650 | orchestrator | 2025-09-02 00:47:12.764662 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-02 00:47:12.764670 | orchestrator | 2025-09-02 00:47:12.764677 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-02 00:47:12.764685 | orchestrator | Tuesday 02 September 2025 00:45:59 +0000 (0:00:12.136) 0:02:34.581 ***** 2025-09-02 00:47:12.764692 | orchestrator | ok: [testbed-manager] 2025-09-02 00:47:12.764699 | orchestrator | 2025-09-02 00:47:12.764707 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-02 
00:47:12.764714 | orchestrator | Tuesday 02 September 2025 00:46:00 +0000 (0:00:00.924) 0:02:35.506 ***** 2025-09-02 00:47:12.764725 | orchestrator | changed: [testbed-manager] 2025-09-02 00:47:12.764733 | orchestrator | 2025-09-02 00:47:12.764740 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-02 00:47:12.764748 | orchestrator | Tuesday 02 September 2025 00:46:00 +0000 (0:00:00.495) 0:02:36.001 ***** 2025-09-02 00:47:12.764755 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-02 00:47:12.764763 | orchestrator | 2025-09-02 00:47:12.764770 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-02 00:47:12.764777 | orchestrator | Tuesday 02 September 2025 00:46:01 +0000 (0:00:00.556) 0:02:36.558 ***** 2025-09-02 00:47:12.764785 | orchestrator | changed: [testbed-manager] 2025-09-02 00:47:12.764792 | orchestrator | 2025-09-02 00:47:12.764799 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-02 00:47:12.764807 | orchestrator | Tuesday 02 September 2025 00:46:02 +0000 (0:00:01.206) 0:02:37.765 ***** 2025-09-02 00:47:12.764814 | orchestrator | changed: [testbed-manager] 2025-09-02 00:47:12.764822 | orchestrator | 2025-09-02 00:47:12.764829 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-02 00:47:12.764836 | orchestrator | Tuesday 02 September 2025 00:46:03 +0000 (0:00:00.689) 0:02:38.455 ***** 2025-09-02 00:47:12.764844 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-02 00:47:12.764851 | orchestrator | 2025-09-02 00:47:12.764859 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-02 00:47:12.764866 | orchestrator | Tuesday 02 September 2025 00:46:04 +0000 (0:00:01.726) 0:02:40.181 ***** 2025-09-02 00:47:12.764873 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-02 00:47:12.764881 | orchestrator | 2025-09-02 00:47:12.764888 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-02 00:47:12.764901 | orchestrator | Tuesday 02 September 2025 00:46:05 +0000 (0:00:00.826) 0:02:41.008 ***** 2025-09-02 00:47:12.764908 | orchestrator | changed: [testbed-manager] 2025-09-02 00:47:12.764964 | orchestrator | 2025-09-02 00:47:12.764975 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-02 00:47:12.764985 | orchestrator | Tuesday 02 September 2025 00:46:05 +0000 (0:00:00.341) 0:02:41.350 ***** 2025-09-02 00:47:12.764996 | orchestrator | changed: [testbed-manager] 2025-09-02 00:47:12.765007 | orchestrator | 2025-09-02 00:47:12.765019 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-09-02 00:47:12.765026 | orchestrator | 2025-09-02 00:47:12.765034 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-09-02 00:47:12.765041 | orchestrator | Tuesday 02 September 2025 00:46:06 +0000 (0:00:00.516) 0:02:41.867 ***** 2025-09-02 00:47:12.765048 | orchestrator | ok: [testbed-manager] 2025-09-02 00:47:12.765056 | orchestrator | 2025-09-02 00:47:12.765063 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-09-02 00:47:12.765070 | orchestrator | Tuesday 02 September 2025 00:46:06 +0000 (0:00:00.131) 0:02:41.998 ***** 2025-09-02 
00:47:12.765078 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-09-02 00:47:12.765085 | orchestrator | 2025-09-02 00:47:12.765092 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-09-02 00:47:12.765100 | orchestrator | Tuesday 02 September 2025 00:46:06 +0000 (0:00:00.184) 0:02:42.182 ***** 2025-09-02 00:47:12.765107 | orchestrator | ok: [testbed-manager] 2025-09-02 00:47:12.765114 | orchestrator | 2025-09-02 00:47:12.765122 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-09-02 00:47:12.765129 | orchestrator | Tuesday 02 September 2025 00:46:07 +0000 (0:00:00.696) 0:02:42.878 ***** 2025-09-02 00:47:12.765137 | orchestrator | ok: [testbed-manager] 2025-09-02 00:47:12.765144 | orchestrator | 2025-09-02 00:47:12.765151 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-09-02 00:47:12.765159 | orchestrator | Tuesday 02 September 2025 00:46:09 +0000 (0:00:01.706) 0:02:44.584 ***** 2025-09-02 00:47:12.765166 | orchestrator | changed: [testbed-manager] 2025-09-02 00:47:12.765173 | orchestrator | 2025-09-02 00:47:12.765180 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-09-02 00:47:12.765188 | orchestrator | Tuesday 02 September 2025 00:46:09 +0000 (0:00:00.696) 0:02:45.281 ***** 2025-09-02 00:47:12.765200 | orchestrator | ok: [testbed-manager] 2025-09-02 00:47:12.765207 | orchestrator | 2025-09-02 00:47:12.765215 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-09-02 00:47:12.765222 | orchestrator | Tuesday 02 September 2025 00:46:10 +0000 (0:00:00.565) 0:02:45.847 ***** 2025-09-02 00:47:12.765229 | orchestrator | changed: [testbed-manager] 2025-09-02 00:47:12.765237 | orchestrator | 2025-09-02 00:47:12.765244 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-09-02 00:47:12.765251 | orchestrator | Tuesday 02 September 2025 00:46:19 +0000 (0:00:09.117) 0:02:54.965 ***** 2025-09-02 00:47:12.765259 | orchestrator | changed: [testbed-manager] 2025-09-02 00:47:12.765266 | orchestrator | 2025-09-02 00:47:12.765273 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-09-02 00:47:12.765281 | orchestrator | Tuesday 02 September 2025 00:46:35 +0000 (0:00:15.488) 0:03:10.454 ***** 2025-09-02 00:47:12.765288 | orchestrator | ok: [testbed-manager] 2025-09-02 00:47:12.765295 | orchestrator | 2025-09-02 00:47:12.765303 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-09-02 00:47:12.765310 | orchestrator | 2025-09-02 00:47:12.765317 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-09-02 00:47:12.765325 | orchestrator | Tuesday 02 September 2025 00:46:35 +0000 (0:00:00.640) 0:03:11.095 ***** 2025-09-02 00:47:12.765332 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.765339 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:12.765347 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.765359 | orchestrator | 2025-09-02 00:47:12.765367 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-09-02 00:47:12.765374 | orchestrator | Tuesday 02 September 2025 00:46:35 +0000 (0:00:00.342) 0:03:11.437 ***** 
2025-09-02 00:47:12.765381 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.765389 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.765396 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.765403 | orchestrator | 2025-09-02 00:47:12.765416 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-09-02 00:47:12.765423 | orchestrator | Tuesday 02 September 2025 00:46:36 +0000 (0:00:00.354) 0:03:11.792 ***** 2025-09-02 00:47:12.765431 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:47:12.765438 | orchestrator | 2025-09-02 00:47:12.765445 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-09-02 00:47:12.765453 | orchestrator | Tuesday 02 September 2025 00:46:37 +0000 (0:00:00.793) 0:03:12.586 ***** 2025-09-02 00:47:12.765460 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.765467 | orchestrator | 2025-09-02 00:47:12.765475 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-09-02 00:47:12.765482 | orchestrator | Tuesday 02 September 2025 00:46:37 +0000 (0:00:00.216) 0:03:12.803 ***** 2025-09-02 00:47:12.765489 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.765496 | orchestrator | 2025-09-02 00:47:12.765504 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-09-02 00:47:12.765511 | orchestrator | Tuesday 02 September 2025 00:46:37 +0000 (0:00:00.304) 0:03:13.107 ***** 2025-09-02 00:47:12.765519 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.765526 | orchestrator | 2025-09-02 00:47:12.765533 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-09-02 00:47:12.765541 | orchestrator | Tuesday 02 September 2025 00:46:37 +0000 (0:00:00.228) 0:03:13.336 ***** 2025-09-02 00:47:12.765548 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.765555 | orchestrator | 2025-09-02 00:47:12.765562 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-09-02 00:47:12.765570 | orchestrator | Tuesday 02 September 2025 00:46:38 +0000 (0:00:00.238) 0:03:13.575 ***** 2025-09-02 00:47:12.765577 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.765584 | orchestrator | 2025-09-02 00:47:12.765592 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-09-02 00:47:12.765599 | orchestrator | Tuesday 02 September 2025 00:46:38 +0000 (0:00:00.207) 0:03:13.782 ***** 2025-09-02 00:47:12.765607 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.765614 | orchestrator | 2025-09-02 00:47:12.765621 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-09-02 00:47:12.765629 | orchestrator | Tuesday 02 September 2025 00:46:38 +0000 (0:00:00.199) 0:03:13.981 ***** 2025-09-02 00:47:12.765636 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.765643 | orchestrator | 2025-09-02 00:47:12.765651 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-09-02 00:47:12.765658 | orchestrator | Tuesday 02 September 2025 00:46:38 +0000 (0:00:00.214) 0:03:14.195 ***** 2025-09-02 00:47:12.765665 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.765673 | orchestrator | 
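The skipped Cilium CLI tasks above outline a download-verify-extract flow that only runs on the first master when an install or update is actually needed. A hedged sketch of that flow follows; the release URL, architecture suffix, temp directory, and module arguments are assumptions for illustration and are not copied from the k3s_server_post role:

- name: Download Cilium CLI and checksum (sketch)
  ansible.builtin.get_url:
    url: "https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64{{ item }}"
    dest: "/tmp/k3s/cilium-linux-amd64{{ item }}"
  loop:
    - ".tar.gz"
    - ".tar.gz.sha256sum"

- name: Verify the downloaded tarball (sketch)
  ansible.builtin.command:
    cmd: sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
    chdir: /tmp/k3s
  changed_when: false

- name: Extract Cilium CLI to /usr/local/bin (sketch)
  ansible.builtin.unarchive:
    src: /tmp/k3s/cilium-linux-amd64.tar.gz
    dest: /usr/local/bin
    remote_src: true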
2025-09-02 00:47:12.765680 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-09-02 00:47:12.765687 | orchestrator | Tuesday 02 September 2025 00:46:38 +0000 (0:00:00.203) 0:03:14.399 ***** 2025-09-02 00:47:12.765694 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.765702 | orchestrator | 2025-09-02 00:47:12.765709 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-09-02 00:47:12.765716 | orchestrator | Tuesday 02 September 2025 00:46:39 +0000 (0:00:00.202) 0:03:14.601 ***** 2025-09-02 00:47:12.765724 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-09-02 00:47:12.765731 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-09-02 00:47:12.765747 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.765755 | orchestrator | 2025-09-02 00:47:12.765762 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-09-02 00:47:12.765769 | orchestrator | Tuesday 02 September 2025 00:46:40 +0000 (0:00:00.927) 0:03:15.529 ***** 2025-09-02 00:47:12.765776 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.765784 | orchestrator | 2025-09-02 00:47:12.765791 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-09-02 00:47:12.765799 | orchestrator | Tuesday 02 September 2025 00:46:40 +0000 (0:00:00.301) 0:03:15.830 ***** 2025-09-02 00:47:12.765806 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.765813 | orchestrator | 2025-09-02 00:47:12.765824 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-09-02 00:47:12.765831 | orchestrator | Tuesday 02 September 2025 00:46:40 +0000 (0:00:00.208) 0:03:16.039 ***** 2025-09-02 00:47:12.765838 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.765846 | orchestrator | 2025-09-02 00:47:12.765853 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-09-02 00:47:12.765861 | orchestrator | Tuesday 02 September 2025 00:46:40 +0000 (0:00:00.213) 0:03:16.253 ***** 2025-09-02 00:47:12.765868 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.765875 | orchestrator | 2025-09-02 00:47:12.765883 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-09-02 00:47:12.765890 | orchestrator | Tuesday 02 September 2025 00:46:41 +0000 (0:00:00.222) 0:03:16.475 ***** 2025-09-02 00:47:12.765897 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.765905 | orchestrator | 2025-09-02 00:47:12.765928 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-09-02 00:47:12.765937 | orchestrator | Tuesday 02 September 2025 00:46:41 +0000 (0:00:00.223) 0:03:16.699 ***** 2025-09-02 00:47:12.765945 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.765952 | orchestrator | 2025-09-02 00:47:12.765959 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-09-02 00:47:12.765967 | orchestrator | Tuesday 02 September 2025 00:46:41 +0000 (0:00:00.236) 0:03:16.935 ***** 2025-09-02 00:47:12.765974 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.765981 | orchestrator | 2025-09-02 00:47:12.765988 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-09-02 00:47:12.765996 
| orchestrator | Tuesday 02 September 2025 00:46:41 +0000 (0:00:00.218) 0:03:17.153 ***** 2025-09-02 00:47:12.766003 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.766010 | orchestrator | 2025-09-02 00:47:12.766054 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-09-02 00:47:12.766066 | orchestrator | Tuesday 02 September 2025 00:46:41 +0000 (0:00:00.229) 0:03:17.382 ***** 2025-09-02 00:47:12.766074 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.766082 | orchestrator | 2025-09-02 00:47:12.766089 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-09-02 00:47:12.766097 | orchestrator | Tuesday 02 September 2025 00:46:42 +0000 (0:00:00.203) 0:03:17.586 ***** 2025-09-02 00:47:12.766104 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.766111 | orchestrator | 2025-09-02 00:47:12.766119 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-09-02 00:47:12.766126 | orchestrator | Tuesday 02 September 2025 00:46:42 +0000 (0:00:00.236) 0:03:17.823 ***** 2025-09-02 00:47:12.766133 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.766141 | orchestrator | 2025-09-02 00:47:12.766148 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-09-02 00:47:12.766156 | orchestrator | Tuesday 02 September 2025 00:46:42 +0000 (0:00:00.232) 0:03:18.055 ***** 2025-09-02 00:47:12.766163 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-09-02 00:47:12.766170 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-09-02 00:47:12.766178 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-09-02 00:47:12.766191 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-09-02 00:47:12.766198 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.766206 | orchestrator | 2025-09-02 00:47:12.766213 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-09-02 00:47:12.766221 | orchestrator | Tuesday 02 September 2025 00:46:43 +0000 (0:00:00.981) 0:03:19.036 ***** 2025-09-02 00:47:12.766228 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.766235 | orchestrator | 2025-09-02 00:47:12.766243 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-09-02 00:47:12.766250 | orchestrator | Tuesday 02 September 2025 00:46:43 +0000 (0:00:00.216) 0:03:19.253 ***** 2025-09-02 00:47:12.766257 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.766274 | orchestrator | 2025-09-02 00:47:12.766282 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-09-02 00:47:12.766289 | orchestrator | Tuesday 02 September 2025 00:46:44 +0000 (0:00:00.204) 0:03:19.457 ***** 2025-09-02 00:47:12.766305 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.766312 | orchestrator | 2025-09-02 00:47:12.766320 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-09-02 00:47:12.766327 | orchestrator | Tuesday 02 September 2025 00:46:44 +0000 (0:00:00.233) 0:03:19.691 ***** 2025-09-02 00:47:12.766334 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.766341 | orchestrator | 2025-09-02 00:47:12.766348 | orchestrator | TASK [k3s_server_post : Test 
for BGP config resources] ************************* 2025-09-02 00:47:12.766356 | orchestrator | Tuesday 02 September 2025 00:46:44 +0000 (0:00:00.256) 0:03:19.948 ***** 2025-09-02 00:47:12.766363 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-09-02 00:47:12.766370 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)  2025-09-02 00:47:12.766377 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.766385 | orchestrator | 2025-09-02 00:47:12.766392 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-09-02 00:47:12.766399 | orchestrator | Tuesday 02 September 2025 00:46:44 +0000 (0:00:00.299) 0:03:20.247 ***** 2025-09-02 00:47:12.766406 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.766414 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.766421 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.766428 | orchestrator | 2025-09-02 00:47:12.766435 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-09-02 00:47:12.766442 | orchestrator | Tuesday 02 September 2025 00:46:45 +0000 (0:00:00.612) 0:03:20.859 ***** 2025-09-02 00:47:12.766450 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.766457 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:12.766464 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.766471 | orchestrator | 2025-09-02 00:47:12.766482 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-09-02 00:47:12.766489 | orchestrator | 2025-09-02 00:47:12.766496 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-09-02 00:47:12.766504 | orchestrator | Tuesday 02 September 2025 00:46:46 +0000 (0:00:01.057) 0:03:21.916 ***** 2025-09-02 00:47:12.766511 | orchestrator | ok: [testbed-manager] 2025-09-02 00:47:12.766518 | orchestrator | 2025-09-02 00:47:12.766526 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-09-02 00:47:12.766533 | orchestrator | Tuesday 02 September 2025 00:46:46 +0000 (0:00:00.134) 0:03:22.051 ***** 2025-09-02 00:47:12.766540 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-09-02 00:47:12.766547 | orchestrator | 2025-09-02 00:47:12.766555 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-09-02 00:47:12.766562 | orchestrator | Tuesday 02 September 2025 00:46:46 +0000 (0:00:00.208) 0:03:22.259 ***** 2025-09-02 00:47:12.766569 | orchestrator | changed: [testbed-manager] 2025-09-02 00:47:12.766582 | orchestrator | 2025-09-02 00:47:12.766589 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-09-02 00:47:12.766596 | orchestrator | 2025-09-02 00:47:12.766604 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-09-02 00:47:12.766611 | orchestrator | Tuesday 02 September 2025 00:46:51 +0000 (0:00:05.165) 0:03:27.425 ***** 2025-09-02 00:47:12.766618 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:47:12.766626 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:47:12.766633 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:47:12.766640 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:12.766647 | orchestrator | ok: [testbed-node-1] 
2025-09-02 00:47:12.766654 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:12.766662 | orchestrator | 2025-09-02 00:47:12.766669 | orchestrator | TASK [Manage labels] *********************************************************** 2025-09-02 00:47:12.766676 | orchestrator | Tuesday 02 September 2025 00:46:53 +0000 (0:00:01.088) 0:03:28.514 ***** 2025-09-02 00:47:12.766688 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-02 00:47:12.766695 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-02 00:47:12.766703 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-02 00:47:12.766710 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-02 00:47:12.766717 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-02 00:47:12.766724 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-02 00:47:12.766732 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-02 00:47:12.766739 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-02 00:47:12.766746 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-02 00:47:12.766753 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-02 00:47:12.766761 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-02 00:47:12.766768 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-02 00:47:12.766776 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-02 00:47:12.766783 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-02 00:47:12.766790 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-02 00:47:12.766797 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-02 00:47:12.766805 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-02 00:47:12.766812 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-02 00:47:12.766819 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-02 00:47:12.766826 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-02 00:47:12.766834 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-02 00:47:12.766841 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-02 00:47:12.766848 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-02 00:47:12.766855 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-02 00:47:12.766863 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-02 00:47:12.766870 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=node-role.osism.tech/rook-mon=true) 2025-09-02 00:47:12.766882 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-02 00:47:12.766890 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-02 00:47:12.766897 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-02 00:47:12.766904 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-02 00:47:12.766928 | orchestrator | 2025-09-02 00:47:12.766939 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-09-02 00:47:12.766955 | orchestrator | Tuesday 02 September 2025 00:47:09 +0000 (0:00:16.023) 0:03:44.537 ***** 2025-09-02 00:47:12.766966 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:47:12.766978 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:47:12.766989 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.767000 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:47:12.767007 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.767015 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.767022 | orchestrator | 2025-09-02 00:47:12.767029 | orchestrator | TASK [Manage taints] *********************************************************** 2025-09-02 00:47:12.767036 | orchestrator | Tuesday 02 September 2025 00:47:09 +0000 (0:00:00.800) 0:03:45.337 ***** 2025-09-02 00:47:12.767044 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:47:12.767051 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:47:12.767058 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:47:12.767065 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:12.767072 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:12.767080 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:12.767087 | orchestrator | 2025-09-02 00:47:12.767094 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:47:12.767102 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:47:12.767110 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-09-02 00:47:12.767118 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-02 00:47:12.767130 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-02 00:47:12.767138 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-02 00:47:12.767145 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-02 00:47:12.767152 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-02 00:47:12.767160 | orchestrator | 2025-09-02 00:47:12.767167 | orchestrator | 2025-09-02 00:47:12.767174 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:47:12.767182 | orchestrator | Tuesday 02 September 2025 00:47:10 +0000 (0:00:00.466) 0:03:45.804 ***** 2025-09-02 00:47:12.767189 | orchestrator | =============================================================================== 2025-09-02 00:47:12.767196 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 44.88s 2025-09-02 00:47:12.767204 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.60s 2025-09-02 00:47:12.767211 | orchestrator | Manage labels ---------------------------------------------------------- 16.02s 2025-09-02 00:47:12.767218 | orchestrator | kubectl : Install required packages ------------------------------------ 15.49s 2025-09-02 00:47:12.767230 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.14s 2025-09-02 00:47:12.767238 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 9.12s 2025-09-02 00:47:12.767245 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.77s 2025-09-02 00:47:12.767252 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.17s 2025-09-02 00:47:12.767260 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.32s 2025-09-02 00:47:12.767267 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.92s 2025-09-02 00:47:12.767275 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.89s 2025-09-02 00:47:12.767282 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.78s 2025-09-02 00:47:12.767290 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.62s 2025-09-02 00:47:12.767297 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.32s 2025-09-02 00:47:12.767304 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.31s 2025-09-02 00:47:12.767311 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.89s 2025-09-02 00:47:12.767319 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.73s 2025-09-02 00:47:12.767326 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.71s 2025-09-02 00:47:12.767333 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.71s 2025-09-02 00:47:12.767340 | orchestrator | k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry --- 1.69s 2025-09-02 00:47:12.767348 | orchestrator | 2025-09-02 00:47:12 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:12.767355 | orchestrator | 2025-09-02 00:47:12 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:15.868344 | orchestrator | 2025-09-02 00:47:15 | INFO  | Task e3911985-43f1-4b75-9afc-bcdd437e5817 is in state STARTED 2025-09-02 00:47:15.868463 | orchestrator | 2025-09-02 00:47:15 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:15.868479 | orchestrator | 2025-09-02 00:47:15 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:15.868490 | orchestrator | 2025-09-02 00:47:15 | INFO  | Task 98460295-bb67-4953-8ada-e712987b048f is in state STARTED 2025-09-02 00:47:15.868501 | orchestrator | 2025-09-02 00:47:15 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state STARTED 2025-09-02 00:47:15.868512 | orchestrator 
| 2025-09-02 00:47:15 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:15.868522 | orchestrator | 2025-09-02 00:47:15 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:19.130229 | orchestrator | 2025-09-02 00:47:19 | INFO  | Task e3911985-43f1-4b75-9afc-bcdd437e5817 is in state SUCCESS 2025-09-02 00:47:19.130593 | orchestrator | 2025-09-02 00:47:19 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:19.131747 | orchestrator | 2025-09-02 00:47:19 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:19.132910 | orchestrator | 2025-09-02 00:47:19 | INFO  | Task 98460295-bb67-4953-8ada-e712987b048f is in state STARTED 2025-09-02 00:47:19.135302 | orchestrator | 2025-09-02 00:47:19 | INFO  | Task 79075510-4ab3-4899-ba40-0282330f39fc is in state SUCCESS 2025-09-02 00:47:19.137892 | orchestrator | 2025-09-02 00:47:19.138102 | orchestrator | 2025-09-02 00:47:19.138125 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-02 00:47:19.138165 | orchestrator | 2025-09-02 00:47:19.138178 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-02 00:47:19.138190 | orchestrator | Tuesday 02 September 2025 00:47:15 +0000 (0:00:00.252) 0:00:00.252 ***** 2025-09-02 00:47:19.138202 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-02 00:47:19.138213 | orchestrator | 2025-09-02 00:47:19.138224 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-02 00:47:19.138235 | orchestrator | Tuesday 02 September 2025 00:47:16 +0000 (0:00:00.813) 0:00:01.066 ***** 2025-09-02 00:47:19.138248 | orchestrator | changed: [testbed-manager] 2025-09-02 00:47:19.138260 | orchestrator | 2025-09-02 00:47:19.138272 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-02 00:47:19.138283 | orchestrator | Tuesday 02 September 2025 00:47:18 +0000 (0:00:01.396) 0:00:02.462 ***** 2025-09-02 00:47:19.138294 | orchestrator | changed: [testbed-manager] 2025-09-02 00:47:19.138305 | orchestrator | 2025-09-02 00:47:19.138316 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:47:19.138328 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:47:19.138341 | orchestrator | 2025-09-02 00:47:19.138352 | orchestrator | 2025-09-02 00:47:19.138365 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:47:19.138379 | orchestrator | Tuesday 02 September 2025 00:47:18 +0000 (0:00:00.552) 0:00:03.015 ***** 2025-09-02 00:47:19.138392 | orchestrator | =============================================================================== 2025-09-02 00:47:19.138405 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.40s 2025-09-02 00:47:19.138417 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.81s 2025-09-02 00:47:19.138430 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.55s 2025-09-02 00:47:19.138442 | orchestrator | 2025-09-02 00:47:19.138455 | orchestrator | 2025-09-02 00:47:19.138467 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 
00:47:19.138480 | orchestrator | 2025-09-02 00:47:19.138492 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 00:47:19.138505 | orchestrator | Tuesday 02 September 2025 00:46:03 +0000 (0:00:00.412) 0:00:00.412 ***** 2025-09-02 00:47:19.138517 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:47:19.138531 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:47:19.138544 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:47:19.138556 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:19.138569 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:19.138581 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:19.138594 | orchestrator | 2025-09-02 00:47:19.138606 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 00:47:19.138619 | orchestrator | Tuesday 02 September 2025 00:46:05 +0000 (0:00:01.197) 0:00:01.609 ***** 2025-09-02 00:47:19.138632 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-02 00:47:19.138645 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-02 00:47:19.138658 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-02 00:47:19.138670 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-02 00:47:19.138683 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-02 00:47:19.138696 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-02 00:47:19.138710 | orchestrator | 2025-09-02 00:47:19.138720 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-09-02 00:47:19.138731 | orchestrator | 2025-09-02 00:47:19.138752 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-09-02 00:47:19.138763 | orchestrator | Tuesday 02 September 2025 00:46:05 +0000 (0:00:00.876) 0:00:02.485 ***** 2025-09-02 00:47:19.138783 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:47:19.138797 | orchestrator | 2025-09-02 00:47:19.138808 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-02 00:47:19.138819 | orchestrator | Tuesday 02 September 2025 00:46:07 +0000 (0:00:01.372) 0:00:03.858 ***** 2025-09-02 00:47:19.138830 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-02 00:47:19.138842 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-02 00:47:19.138853 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-02 00:47:19.138864 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-02 00:47:19.138874 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-02 00:47:19.138885 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-02 00:47:19.138897 | orchestrator | 2025-09-02 00:47:19.138908 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-02 00:47:19.138919 | orchestrator | Tuesday 02 September 2025 00:46:09 +0000 (0:00:01.795) 0:00:05.654 ***** 2025-09-02 00:47:19.138947 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 
2025-09-02 00:47:19.138958 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-02 00:47:19.138969 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-02 00:47:19.138979 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-02 00:47:19.138990 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-02 00:47:19.139001 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-02 00:47:19.139012 | orchestrator | 2025-09-02 00:47:19.139023 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-02 00:47:19.139057 | orchestrator | Tuesday 02 September 2025 00:46:10 +0000 (0:00:01.760) 0:00:07.414 ***** 2025-09-02 00:47:19.139069 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-09-02 00:47:19.139080 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:47:19.139091 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-09-02 00:47:19.139102 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:47:19.139112 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-09-02 00:47:19.139123 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:47:19.139134 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-09-02 00:47:19.139144 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:19.139155 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-09-02 00:47:19.139166 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:19.139177 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-09-02 00:47:19.139187 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:19.139198 | orchestrator | 2025-09-02 00:47:19.139209 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-09-02 00:47:19.139220 | orchestrator | Tuesday 02 September 2025 00:46:13 +0000 (0:00:02.422) 0:00:09.836 ***** 2025-09-02 00:47:19.139231 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:47:19.139242 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:47:19.139252 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:47:19.139263 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:19.139273 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:47:19.139284 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:19.139295 | orchestrator | 2025-09-02 00:47:19.139305 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-02 00:47:19.139316 | orchestrator | Tuesday 02 September 2025 00:46:14 +0000 (0:00:01.596) 0:00:11.433 ***** 2025-09-02 00:47:19.139331 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-02 00:47:19.139360 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-02 00:47:19.139379 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-02 00:47:19.139391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-02 00:47:19.139411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-02 00:47:19.139423 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-02 00:47:19.139442 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-02 00:47:19.139454 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-02 00:47:19.139466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140225 | orchestrator | 2025-09-02 00:47:19.140236 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-02 00:47:19.140248 | orchestrator | Tuesday 02 September 2025 00:46:16 +0000 (0:00:02.021) 0:00:13.455 ***** 2025-09-02 00:47:19.140259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140271 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140283 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140362 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140373 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2025-09-02 00:47:19.140384 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140455 | orchestrator | 2025-09-02 00:47:19.140466 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-09-02 00:47:19.140478 | orchestrator | Tuesday 02 September 2025 00:46:20 +0000 (0:00:03.649) 0:00:17.105 ***** 2025-09-02 00:47:19.140489 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:47:19.140500 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:47:19.140511 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:47:19.140522 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:47:19.140533 | orchestrator | skipping: [testbed-node-1] 
2025-09-02 00:47:19.140544 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:47:19.140555 | orchestrator | 2025-09-02 00:47:19.140566 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-02 00:47:19.140577 | orchestrator | Tuesday 02 September 2025 00:46:24 +0000 (0:00:03.607) 0:00:20.712 ***** 2025-09-02 00:47:19.140588 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140600 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140625 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140657 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140680 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140692 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140736 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-02 00:47:19.140771 | orchestrator | 2025-09-02 00:47:19.140782 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-02 00:47:19.140793 | orchestrator | Tuesday 02 September 2025 00:46:27 +0000 (0:00:02.956) 0:00:23.675 ***** 2025-09-02 00:47:19.140804 | orchestrator | 2025-09-02 00:47:19.140815 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-02 00:47:19.140826 | orchestrator | Tuesday 02 September 2025 00:46:27 +0000 (0:00:00.403) 0:00:24.078 ***** 2025-09-02 00:47:19.140837 | orchestrator | 2025-09-02 00:47:19.140848 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-02 00:47:19.140859 | orchestrator | Tuesday 02 September 2025 00:46:27 +0000 (0:00:00.320) 0:00:24.398 ***** 2025-09-02 00:47:19.140870 | orchestrator | 2025-09-02 00:47:19.140881 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-02 00:47:19.140892 | orchestrator | Tuesday 02 September 2025 00:46:28 +0000 (0:00:00.416) 0:00:24.814 ***** 2025-09-02 00:47:19.140903 | orchestrator | 2025-09-02 00:47:19.140914 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-02 00:47:19.140940 | orchestrator | Tuesday 02 September 2025 00:46:28 +0000 (0:00:00.468) 0:00:25.282 ***** 
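[editor's note] The container definitions repeated in the task output above each carry a healthcheck command: `ovsdb-client list-dbs` for openvswitch_db and `ovs-appctl version` for openvswitch_vswitchd. As a minimal, hypothetical sketch (not the kolla-ansible source) of how those same checks could be run ad hoc against the deployed containers, assuming a Docker-based kolla deployment and the container names shown in the items above:

- name: Ad-hoc healthcheck of the openvswitch containers (sketch, not part of the play)
  ansible.builtin.command: "docker exec {{ item.container }} {{ item.check }}"
  changed_when: false          # read-only probe; never report a change
  loop:
    - { container: openvswitch_db,       check: "ovsdb-client list-dbs" }   # same test as the db-server healthcheck above
    - { container: openvswitch_vswitchd, check: "ovs-appctl version" }      # same test as the vswitchd healthcheck above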
2025-09-02 00:47:19.140951 | orchestrator | 2025-09-02 00:47:19.140963 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-02 00:47:19.140981 | orchestrator | Tuesday 02 September 2025 00:46:29 +0000 (0:00:00.417) 0:00:25.700 ***** 2025-09-02 00:47:19.140992 | orchestrator | 2025-09-02 00:47:19.141003 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-09-02 00:47:19.141014 | orchestrator | Tuesday 02 September 2025 00:46:29 +0000 (0:00:00.393) 0:00:26.094 ***** 2025-09-02 00:47:19.141025 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:47:19.141036 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:47:19.141047 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:19.141058 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:47:19.141069 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:47:19.141080 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:47:19.141090 | orchestrator | 2025-09-02 00:47:19.141102 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-09-02 00:47:19.141119 | orchestrator | Tuesday 02 September 2025 00:46:41 +0000 (0:00:11.874) 0:00:37.968 ***** 2025-09-02 00:47:19.141131 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:47:19.141142 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:47:19.141153 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:47:19.141164 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:47:19.141175 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:47:19.141186 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:47:19.141197 | orchestrator | 2025-09-02 00:47:19.141208 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-02 00:47:19.141224 | orchestrator | Tuesday 02 September 2025 00:46:43 +0000 (0:00:01.987) 0:00:39.956 ***** 2025-09-02 00:47:19.141235 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:47:19.141246 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:47:19.141257 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:47:19.141268 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:47:19.141279 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:47:19.141290 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:19.141301 | orchestrator | 2025-09-02 00:47:19.141312 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-09-02 00:47:19.141323 | orchestrator | Tuesday 02 September 2025 00:46:54 +0000 (0:00:10.814) 0:00:50.770 ***** 2025-09-02 00:47:19.141334 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-09-02 00:47:19.141345 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-09-02 00:47:19.141357 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-09-02 00:47:19.141368 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-09-02 00:47:19.141379 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-09-02 00:47:19.141390 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 
'testbed-node-2'}) 2025-09-02 00:47:19.141400 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-09-02 00:47:19.141411 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-09-02 00:47:19.141422 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-09-02 00:47:19.141433 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-09-02 00:47:19.141444 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-09-02 00:47:19.141455 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-09-02 00:47:19.141466 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-02 00:47:19.141486 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-02 00:47:19.141497 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-02 00:47:19.141508 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-02 00:47:19.141519 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-02 00:47:19.141530 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-02 00:47:19.141540 | orchestrator | 2025-09-02 00:47:19.141551 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-09-02 00:47:19.141562 | orchestrator | Tuesday 02 September 2025 00:47:02 +0000 (0:00:08.133) 0:00:58.903 ***** 2025-09-02 00:47:19.141574 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-09-02 00:47:19.141585 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:47:19.141596 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-09-02 00:47:19.141607 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:47:19.141617 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-09-02 00:47:19.141628 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:47:19.141639 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-09-02 00:47:19.141650 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-09-02 00:47:19.141661 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-09-02 00:47:19.141672 | orchestrator | 2025-09-02 00:47:19.141683 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-09-02 00:47:19.141694 | orchestrator | Tuesday 02 September 2025 00:47:06 +0000 (0:00:04.259) 0:01:03.163 ***** 2025-09-02 00:47:19.141705 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-09-02 00:47:19.141716 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:47:19.141727 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-09-02 00:47:19.141738 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:47:19.141749 | orchestrator | 
skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-09-02 00:47:19.141760 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:47:19.141777 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-09-02 00:47:19.141788 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-09-02 00:47:19.141799 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-09-02 00:47:19.141810 | orchestrator | 2025-09-02 00:47:19.141821 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-02 00:47:19.141832 | orchestrator | Tuesday 02 September 2025 00:47:10 +0000 (0:00:03.633) 0:01:06.797 ***** 2025-09-02 00:47:19.141843 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:47:19.141859 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:47:19.141870 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:47:19.141881 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:47:19.141892 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:47:19.141902 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:47:19.141913 | orchestrator | 2025-09-02 00:47:19.141952 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:47:19.141964 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-02 00:47:19.141976 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-02 00:47:19.141988 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-02 00:47:19.142006 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-02 00:47:19.142062 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-02 00:47:19.142074 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-02 00:47:19.142085 | orchestrator | 2025-09-02 00:47:19.142096 | orchestrator | 2025-09-02 00:47:19.142107 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:47:19.142118 | orchestrator | Tuesday 02 September 2025 00:47:18 +0000 (0:00:08.017) 0:01:14.814 ***** 2025-09-02 00:47:19.142129 | orchestrator | =============================================================================== 2025-09-02 00:47:19.142140 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.83s 2025-09-02 00:47:19.142151 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.87s 2025-09-02 00:47:19.142162 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.13s 2025-09-02 00:47:19.142172 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 4.26s 2025-09-02 00:47:19.142183 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.65s 2025-09-02 00:47:19.142194 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.63s 2025-09-02 00:47:19.142205 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 3.61s 2025-09-02 00:47:19.142216 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.96s 
2025-09-02 00:47:19.142227 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.42s 2025-09-02 00:47:19.142237 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.42s 2025-09-02 00:47:19.142248 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.02s 2025-09-02 00:47:19.142259 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.99s 2025-09-02 00:47:19.142270 | orchestrator | module-load : Load modules ---------------------------------------------- 1.80s 2025-09-02 00:47:19.142280 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.76s 2025-09-02 00:47:19.142291 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.60s 2025-09-02 00:47:19.142302 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.37s 2025-09-02 00:47:19.142313 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.20s 2025-09-02 00:47:19.142323 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s 2025-09-02 00:47:19.142334 | orchestrator | 2025-09-02 00:47:19 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:19.142345 | orchestrator | 2025-09-02 00:47:19 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:22.259183 | orchestrator | 2025-09-02 00:47:22 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:47:22.259336 | orchestrator | 2025-09-02 00:47:22 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:22.260151 | orchestrator | 2025-09-02 00:47:22 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:22.261210 | orchestrator | 2025-09-02 00:47:22 | INFO  | Task 98460295-bb67-4953-8ada-e712987b048f is in state STARTED 2025-09-02 00:47:22.262549 | orchestrator | 2025-09-02 00:47:22 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:22.262570 | orchestrator | 2025-09-02 00:47:22 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:25.303830 | orchestrator | 2025-09-02 00:47:25 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:47:25.306392 | orchestrator | 2025-09-02 00:47:25 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:25.308880 | orchestrator | 2025-09-02 00:47:25 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:25.311300 | orchestrator | 2025-09-02 00:47:25 | INFO  | Task 98460295-bb67-4953-8ada-e712987b048f is in state SUCCESS 2025-09-02 00:47:25.313801 | orchestrator | 2025-09-02 00:47:25 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:25.313978 | orchestrator | 2025-09-02 00:47:25 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:28.365590 | orchestrator | 2025-09-02 00:47:28 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:47:28.365851 | orchestrator | 2025-09-02 00:47:28 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:28.367725 | orchestrator | 2025-09-02 00:47:28 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:28.368423 | orchestrator | 2025-09-02 00:47:28 | INFO  | 
Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:28.368787 | orchestrator | 2025-09-02 00:47:28 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:31.411617 | orchestrator | 2025-09-02 00:47:31 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:47:31.412281 | orchestrator | 2025-09-02 00:47:31 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:31.413520 | orchestrator | 2025-09-02 00:47:31 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:31.414502 | orchestrator | 2025-09-02 00:47:31 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:31.414528 | orchestrator | 2025-09-02 00:47:31 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:34.453759 | orchestrator | 2025-09-02 00:47:34 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:47:34.454616 | orchestrator | 2025-09-02 00:47:34 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:34.455757 | orchestrator | 2025-09-02 00:47:34 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:34.456620 | orchestrator | 2025-09-02 00:47:34 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:34.456638 | orchestrator | 2025-09-02 00:47:34 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:37.510725 | orchestrator | 2025-09-02 00:47:37 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:47:37.511440 | orchestrator | 2025-09-02 00:47:37 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:37.512217 | orchestrator | 2025-09-02 00:47:37 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:37.513551 | orchestrator | 2025-09-02 00:47:37 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:37.513591 | orchestrator | 2025-09-02 00:47:37 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:40.552812 | orchestrator | 2025-09-02 00:47:40 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:47:40.552936 | orchestrator | 2025-09-02 00:47:40 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:40.553835 | orchestrator | 2025-09-02 00:47:40 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:40.555227 | orchestrator | 2025-09-02 00:47:40 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:40.555251 | orchestrator | 2025-09-02 00:47:40 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:43.658782 | orchestrator | 2025-09-02 00:47:43 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:47:43.663169 | orchestrator | 2025-09-02 00:47:43 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:43.664511 | orchestrator | 2025-09-02 00:47:43 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:43.666404 | orchestrator | 2025-09-02 00:47:43 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:43.666869 | orchestrator | 2025-09-02 00:47:43 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:46.713651 | orchestrator | 2025-09-02 00:47:46 | INFO  | Task 
d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:47:46.714437 | orchestrator | 2025-09-02 00:47:46 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:46.715359 | orchestrator | 2025-09-02 00:47:46 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:46.716410 | orchestrator | 2025-09-02 00:47:46 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:46.716579 | orchestrator | 2025-09-02 00:47:46 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:49.767642 | orchestrator | 2025-09-02 00:47:49 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:47:49.770130 | orchestrator | 2025-09-02 00:47:49 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:49.771843 | orchestrator | 2025-09-02 00:47:49 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:49.772283 | orchestrator | 2025-09-02 00:47:49 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:49.772307 | orchestrator | 2025-09-02 00:47:49 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:52.820227 | orchestrator | 2025-09-02 00:47:52 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:47:52.820689 | orchestrator | 2025-09-02 00:47:52 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:52.821376 | orchestrator | 2025-09-02 00:47:52 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:52.821915 | orchestrator | 2025-09-02 00:47:52 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:52.822104 | orchestrator | 2025-09-02 00:47:52 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:55.858662 | orchestrator | 2025-09-02 00:47:55 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:47:55.860510 | orchestrator | 2025-09-02 00:47:55 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:55.863193 | orchestrator | 2025-09-02 00:47:55 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:55.864263 | orchestrator | 2025-09-02 00:47:55 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:55.864297 | orchestrator | 2025-09-02 00:47:55 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:47:58.907390 | orchestrator | 2025-09-02 00:47:58 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:47:58.907521 | orchestrator | 2025-09-02 00:47:58 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:47:58.907686 | orchestrator | 2025-09-02 00:47:58 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:47:58.908812 | orchestrator | 2025-09-02 00:47:58 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:47:58.908839 | orchestrator | 2025-09-02 00:47:58 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:01.950704 | orchestrator | 2025-09-02 00:48:01 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:01.951159 | orchestrator | 2025-09-02 00:48:01 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:01.952137 | orchestrator | 2025-09-02 00:48:01 | INFO  | Task 
b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:01.953164 | orchestrator | 2025-09-02 00:48:01 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:01.953187 | orchestrator | 2025-09-02 00:48:01 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:04.998373 | orchestrator | 2025-09-02 00:48:04 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:05.002204 | orchestrator | 2025-09-02 00:48:05 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:05.008776 | orchestrator | 2025-09-02 00:48:05 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:05.012936 | orchestrator | 2025-09-02 00:48:05 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:05.012964 | orchestrator | 2025-09-02 00:48:05 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:08.071394 | orchestrator | 2025-09-02 00:48:08 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:08.072389 | orchestrator | 2025-09-02 00:48:08 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:08.074291 | orchestrator | 2025-09-02 00:48:08 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:08.075781 | orchestrator | 2025-09-02 00:48:08 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:08.076140 | orchestrator | 2025-09-02 00:48:08 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:11.123339 | orchestrator | 2025-09-02 00:48:11 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:11.126374 | orchestrator | 2025-09-02 00:48:11 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:11.128480 | orchestrator | 2025-09-02 00:48:11 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:11.130748 | orchestrator | 2025-09-02 00:48:11 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:11.130926 | orchestrator | 2025-09-02 00:48:11 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:14.168709 | orchestrator | 2025-09-02 00:48:14 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:14.169212 | orchestrator | 2025-09-02 00:48:14 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:14.170353 | orchestrator | 2025-09-02 00:48:14 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:14.171591 | orchestrator | 2025-09-02 00:48:14 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:14.171653 | orchestrator | 2025-09-02 00:48:14 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:17.206591 | orchestrator | 2025-09-02 00:48:17 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:17.208364 | orchestrator | 2025-09-02 00:48:17 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:17.209921 | orchestrator | 2025-09-02 00:48:17 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:17.211755 | orchestrator | 2025-09-02 00:48:17 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:17.212088 | orchestrator | 2025-09-02 00:48:17 | INFO  | Wait 1 
second(s) until the next check 2025-09-02 00:48:20.255662 | orchestrator | 2025-09-02 00:48:20 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:20.256970 | orchestrator | 2025-09-02 00:48:20 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:20.258141 | orchestrator | 2025-09-02 00:48:20 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:20.259276 | orchestrator | 2025-09-02 00:48:20 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:20.260713 | orchestrator | 2025-09-02 00:48:20 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:23.308467 | orchestrator | 2025-09-02 00:48:23 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:23.308690 | orchestrator | 2025-09-02 00:48:23 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:23.313570 | orchestrator | 2025-09-02 00:48:23 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:23.316602 | orchestrator | 2025-09-02 00:48:23 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:23.316890 | orchestrator | 2025-09-02 00:48:23 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:26.370706 | orchestrator | 2025-09-02 00:48:26 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:26.373911 | orchestrator | 2025-09-02 00:48:26 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:26.375765 | orchestrator | 2025-09-02 00:48:26 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:26.378140 | orchestrator | 2025-09-02 00:48:26 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:26.378250 | orchestrator | 2025-09-02 00:48:26 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:29.432575 | orchestrator | 2025-09-02 00:48:29 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:29.433897 | orchestrator | 2025-09-02 00:48:29 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:29.436011 | orchestrator | 2025-09-02 00:48:29 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:29.437888 | orchestrator | 2025-09-02 00:48:29 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:29.438457 | orchestrator | 2025-09-02 00:48:29 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:32.483804 | orchestrator | 2025-09-02 00:48:32 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:32.486204 | orchestrator | 2025-09-02 00:48:32 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:32.487421 | orchestrator | 2025-09-02 00:48:32 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:32.488781 | orchestrator | 2025-09-02 00:48:32 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:32.489132 | orchestrator | 2025-09-02 00:48:32 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:35.536306 | orchestrator | 2025-09-02 00:48:35 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:35.537610 | orchestrator | 2025-09-02 00:48:35 | INFO  | Task 
ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:35.539566 | orchestrator | 2025-09-02 00:48:35 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:35.543137 | orchestrator | 2025-09-02 00:48:35 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:35.543612 | orchestrator | 2025-09-02 00:48:35 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:38.579500 | orchestrator | 2025-09-02 00:48:38 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:38.579746 | orchestrator | 2025-09-02 00:48:38 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:38.580715 | orchestrator | 2025-09-02 00:48:38 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:38.581708 | orchestrator | 2025-09-02 00:48:38 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:38.581731 | orchestrator | 2025-09-02 00:48:38 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:41.624886 | orchestrator | 2025-09-02 00:48:41 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:41.625694 | orchestrator | 2025-09-02 00:48:41 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:41.627626 | orchestrator | 2025-09-02 00:48:41 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:41.629417 | orchestrator | 2025-09-02 00:48:41 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:41.629492 | orchestrator | 2025-09-02 00:48:41 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:44.677559 | orchestrator | 2025-09-02 00:48:44 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:44.679577 | orchestrator | 2025-09-02 00:48:44 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:44.681861 | orchestrator | 2025-09-02 00:48:44 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:44.684027 | orchestrator | 2025-09-02 00:48:44 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:44.684103 | orchestrator | 2025-09-02 00:48:44 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:47.729716 | orchestrator | 2025-09-02 00:48:47 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:47.729973 | orchestrator | 2025-09-02 00:48:47 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:47.730987 | orchestrator | 2025-09-02 00:48:47 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:47.731874 | orchestrator | 2025-09-02 00:48:47 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:47.731992 | orchestrator | 2025-09-02 00:48:47 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:50.767937 | orchestrator | 2025-09-02 00:48:50 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:50.769017 | orchestrator | 2025-09-02 00:48:50 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:50.771857 | orchestrator | 2025-09-02 00:48:50 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:50.774581 | orchestrator | 2025-09-02 00:48:50 | INFO  | Task 
491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:50.775006 | orchestrator | 2025-09-02 00:48:50 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:53.811002 | orchestrator | 2025-09-02 00:48:53 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:53.811609 | orchestrator | 2025-09-02 00:48:53 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:53.813598 | orchestrator | 2025-09-02 00:48:53 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:53.815906 | orchestrator | 2025-09-02 00:48:53 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:53.815950 | orchestrator | 2025-09-02 00:48:53 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:56.864195 | orchestrator | 2025-09-02 00:48:56 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:56.865994 | orchestrator | 2025-09-02 00:48:56 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:56.868430 | orchestrator | 2025-09-02 00:48:56 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:56.869904 | orchestrator | 2025-09-02 00:48:56 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state STARTED 2025-09-02 00:48:56.870213 | orchestrator | 2025-09-02 00:48:56 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:59.928285 | orchestrator | 2025-09-02 00:48:59 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:48:59.930327 | orchestrator | 2025-09-02 00:48:59 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:48:59.932289 | orchestrator | 2025-09-02 00:48:59 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:48:59.934679 | orchestrator | 2025-09-02 00:48:59 | INFO  | Task 491a8254-9cc0-4ddc-86f7-bd41cf95c023 is in state SUCCESS 2025-09-02 00:48:59.934706 | orchestrator | 2025-09-02 00:48:59 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:48:59.936005 | orchestrator | 2025-09-02 00:48:59.936035 | orchestrator | 2025-09-02 00:48:59.936048 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-02 00:48:59.936060 | orchestrator | 2025-09-02 00:48:59.936099 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-02 00:48:59.936111 | orchestrator | Tuesday 02 September 2025 00:47:15 +0000 (0:00:00.263) 0:00:00.263 ***** 2025-09-02 00:48:59.936123 | orchestrator | ok: [testbed-manager] 2025-09-02 00:48:59.936136 | orchestrator | 2025-09-02 00:48:59.936147 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-02 00:48:59.936158 | orchestrator | Tuesday 02 September 2025 00:47:16 +0000 (0:00:00.624) 0:00:00.888 ***** 2025-09-02 00:48:59.936169 | orchestrator | ok: [testbed-manager] 2025-09-02 00:48:59.936180 | orchestrator | 2025-09-02 00:48:59.936191 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-02 00:48:59.936203 | orchestrator | Tuesday 02 September 2025 00:47:17 +0000 (0:00:00.672) 0:00:01.560 ***** 2025-09-02 00:48:59.936214 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-02 00:48:59.936226 | orchestrator | 2025-09-02 00:48:59.936236 | orchestrator | TASK [Write kubeconfig 
file] *************************************************** 2025-09-02 00:48:59.936274 | orchestrator | Tuesday 02 September 2025 00:47:17 +0000 (0:00:00.726) 0:00:02.286 ***** 2025-09-02 00:48:59.936286 | orchestrator | changed: [testbed-manager] 2025-09-02 00:48:59.936297 | orchestrator | 2025-09-02 00:48:59.936308 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-02 00:48:59.936319 | orchestrator | Tuesday 02 September 2025 00:47:19 +0000 (0:00:01.801) 0:00:04.088 ***** 2025-09-02 00:48:59.936330 | orchestrator | changed: [testbed-manager] 2025-09-02 00:48:59.936340 | orchestrator | 2025-09-02 00:48:59.936351 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-02 00:48:59.936362 | orchestrator | Tuesday 02 September 2025 00:47:20 +0000 (0:00:01.011) 0:00:05.100 ***** 2025-09-02 00:48:59.936373 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-02 00:48:59.936384 | orchestrator | 2025-09-02 00:48:59.936395 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-02 00:48:59.936406 | orchestrator | Tuesday 02 September 2025 00:47:22 +0000 (0:00:01.677) 0:00:06.778 ***** 2025-09-02 00:48:59.936417 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-02 00:48:59.936427 | orchestrator | 2025-09-02 00:48:59.936438 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-02 00:48:59.936449 | orchestrator | Tuesday 02 September 2025 00:47:23 +0000 (0:00:00.869) 0:00:07.647 ***** 2025-09-02 00:48:59.936460 | orchestrator | ok: [testbed-manager] 2025-09-02 00:48:59.936472 | orchestrator | 2025-09-02 00:48:59.936483 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-02 00:48:59.936493 | orchestrator | Tuesday 02 September 2025 00:47:23 +0000 (0:00:00.438) 0:00:08.086 ***** 2025-09-02 00:48:59.936504 | orchestrator | ok: [testbed-manager] 2025-09-02 00:48:59.936515 | orchestrator | 2025-09-02 00:48:59.936526 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:48:59.936537 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:48:59.936550 | orchestrator | 2025-09-02 00:48:59.936561 | orchestrator | 2025-09-02 00:48:59.936572 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:48:59.936583 | orchestrator | Tuesday 02 September 2025 00:47:23 +0000 (0:00:00.337) 0:00:08.424 ***** 2025-09-02 00:48:59.936594 | orchestrator | =============================================================================== 2025-09-02 00:48:59.936605 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.80s 2025-09-02 00:48:59.936619 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.68s 2025-09-02 00:48:59.936631 | orchestrator | Change server address in the kubeconfig --------------------------------- 1.01s 2025-09-02 00:48:59.936644 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.87s 2025-09-02 00:48:59.936656 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.73s 2025-09-02 00:48:59.936668 | orchestrator | Create .kube directory -------------------------------------------------- 0.67s 
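
The "Prepare kubeconfig file" play recapped above fetches the kubeconfig from testbed-node-0 and then rewrites its server address, once for the operator user and once for use inside the manager service. The playbook source is not part of this console log, so the following is only a minimal sketch of that kind of rewrite, assuming PyYAML and placeholder values for the file path and API endpoint rather than the testbed's real ones:

```python
# Hypothetical illustration of what "Change server address in the kubeconfig"
# might boil down to; the real task logic is not shown in this log, and the
# path and server URL below are placeholders, not the testbed's actual values.
import os
import yaml  # PyYAML


def set_kubeconfig_server(path: str, server: str) -> None:
    """Point every cluster entry in a kubeconfig at the given API server URL."""
    with open(path) as fh:
        config = yaml.safe_load(fh)
    for cluster in config.get("clusters", []):
        cluster["cluster"]["server"] = server
    with open(path, "w") as fh:
        yaml.safe_dump(config, fh, default_flow_style=False)


if __name__ == "__main__":
    # Assumed values for illustration only.
    set_kubeconfig_server(
        os.path.expanduser("~/.kube/config"),
        "https://api.example.internal:6443",
    )
```
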
2025-09-02 00:48:59.936681 | orchestrator | Get home directory of operator user ------------------------------------- 0.62s 2025-09-02 00:48:59.936693 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.44s 2025-09-02 00:48:59.936706 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.34s 2025-09-02 00:48:59.936719 | orchestrator | 2025-09-02 00:48:59.936731 | orchestrator | 2025-09-02 00:48:59.936744 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-09-02 00:48:59.936755 | orchestrator | 2025-09-02 00:48:59.936768 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-02 00:48:59.936780 | orchestrator | Tuesday 02 September 2025 00:46:29 +0000 (0:00:00.163) 0:00:00.163 ***** 2025-09-02 00:48:59.936792 | orchestrator | ok: [localhost] => { 2025-09-02 00:48:59.936806 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-09-02 00:48:59.936826 | orchestrator | } 2025-09-02 00:48:59.936840 | orchestrator | 2025-09-02 00:48:59.936853 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-09-02 00:48:59.936866 | orchestrator | Tuesday 02 September 2025 00:46:29 +0000 (0:00:00.042) 0:00:00.206 ***** 2025-09-02 00:48:59.936880 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-09-02 00:48:59.936895 | orchestrator | ...ignoring 2025-09-02 00:48:59.936908 | orchestrator | 2025-09-02 00:48:59.936921 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-09-02 00:48:59.936934 | orchestrator | Tuesday 02 September 2025 00:46:33 +0000 (0:00:03.580) 0:00:03.786 ***** 2025-09-02 00:48:59.936947 | orchestrator | skipping: [localhost] 2025-09-02 00:48:59.936960 | orchestrator | 2025-09-02 00:48:59.936984 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-09-02 00:48:59.936996 | orchestrator | Tuesday 02 September 2025 00:46:33 +0000 (0:00:00.044) 0:00:03.831 ***** 2025-09-02 00:48:59.937007 | orchestrator | ok: [localhost] 2025-09-02 00:48:59.937018 | orchestrator | 2025-09-02 00:48:59.937029 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 00:48:59.937040 | orchestrator | 2025-09-02 00:48:59.937165 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 00:48:59.937189 | orchestrator | Tuesday 02 September 2025 00:46:33 +0000 (0:00:00.153) 0:00:03.985 ***** 2025-09-02 00:48:59.937201 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:48:59.937212 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:48:59.937222 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:48:59.937233 | orchestrator | 2025-09-02 00:48:59.937244 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 00:48:59.937255 | orchestrator | Tuesday 02 September 2025 00:46:33 +0000 (0:00:00.470) 0:00:04.455 ***** 2025-09-02 00:48:59.937266 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-09-02 00:48:59.937277 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-09-02 00:48:59.937288 | orchestrator | 
ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-09-02 00:48:59.937299 | orchestrator | 2025-09-02 00:48:59.937310 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-09-02 00:48:59.937321 | orchestrator | 2025-09-02 00:48:59.937332 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-02 00:48:59.937343 | orchestrator | Tuesday 02 September 2025 00:46:34 +0000 (0:00:00.829) 0:00:05.285 ***** 2025-09-02 00:48:59.937354 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:48:59.937365 | orchestrator | 2025-09-02 00:48:59.937375 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-02 00:48:59.937386 | orchestrator | Tuesday 02 September 2025 00:46:35 +0000 (0:00:00.768) 0:00:06.053 ***** 2025-09-02 00:48:59.937397 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:48:59.937408 | orchestrator | 2025-09-02 00:48:59.937418 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-09-02 00:48:59.937429 | orchestrator | Tuesday 02 September 2025 00:46:36 +0000 (0:00:01.189) 0:00:07.242 ***** 2025-09-02 00:48:59.937440 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:48:59.937451 | orchestrator | 2025-09-02 00:48:59.937461 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-09-02 00:48:59.937472 | orchestrator | Tuesday 02 September 2025 00:46:37 +0000 (0:00:00.459) 0:00:07.702 ***** 2025-09-02 00:48:59.937483 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:48:59.937494 | orchestrator | 2025-09-02 00:48:59.937504 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-09-02 00:48:59.937515 | orchestrator | Tuesday 02 September 2025 00:46:37 +0000 (0:00:00.468) 0:00:08.170 ***** 2025-09-02 00:48:59.937526 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:48:59.937537 | orchestrator | 2025-09-02 00:48:59.937547 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-09-02 00:48:59.937566 | orchestrator | Tuesday 02 September 2025 00:46:38 +0000 (0:00:00.384) 0:00:08.555 ***** 2025-09-02 00:48:59.937583 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:48:59.937594 | orchestrator | 2025-09-02 00:48:59.937605 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-02 00:48:59.937616 | orchestrator | Tuesday 02 September 2025 00:46:38 +0000 (0:00:00.398) 0:00:08.953 ***** 2025-09-02 00:48:59.937627 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:48:59.937638 | orchestrator | 2025-09-02 00:48:59.937649 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-02 00:48:59.937660 | orchestrator | Tuesday 02 September 2025 00:46:39 +0000 (0:00:01.186) 0:00:10.140 ***** 2025-09-02 00:48:59.937671 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:48:59.937681 | orchestrator | 2025-09-02 00:48:59.937692 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-09-02 00:48:59.937703 | orchestrator | Tuesday 02 September 2025 00:46:40 +0000 (0:00:01.137) 0:00:11.278 ***** 2025-09-02 00:48:59.937714 | 
orchestrator | skipping: [testbed-node-0] 2025-09-02 00:48:59.937725 | orchestrator | 2025-09-02 00:48:59.937736 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-09-02 00:48:59.937747 | orchestrator | Tuesday 02 September 2025 00:46:41 +0000 (0:00:00.369) 0:00:11.648 ***** 2025-09-02 00:48:59.937758 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:48:59.937769 | orchestrator | 2025-09-02 00:48:59.937780 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-09-02 00:48:59.937791 | orchestrator | Tuesday 02 September 2025 00:46:41 +0000 (0:00:00.601) 0:00:12.249 ***** 2025-09-02 00:48:59.937819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-02 00:48:59.937836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-02 00:48:59.937855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-02 00:48:59.937875 | orchestrator | 2025-09-02 00:48:59.937886 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-09-02 00:48:59.937898 | orchestrator | Tuesday 02 September 2025 00:46:43 +0000 (0:00:01.732) 0:00:13.982 ***** 2025-09-02 00:48:59.937909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-02 00:48:59.937929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-02 00:48:59.937942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-02 00:48:59.937961 | orchestrator | 2025-09-02 00:48:59.937972 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-09-02 00:48:59.937983 | orchestrator | Tuesday 02 September 2025 00:46:47 +0000 (0:00:04.454) 0:00:18.436 ***** 2025-09-02 00:48:59.937993 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-02 00:48:59.938005 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-02 00:48:59.938134 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-02 00:48:59.938150 | orchestrator | 2025-09-02 00:48:59.938161 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-09-02 00:48:59.938178 | orchestrator | Tuesday 02 September 2025 00:46:49 +0000 (0:00:01.609) 0:00:20.046 ***** 2025-09-02 00:48:59.938190 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-02 00:48:59.938200 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-02 00:48:59.938211 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-02 00:48:59.938222 | orchestrator | 2025-09-02 00:48:59.938233 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-02 00:48:59.938244 | orchestrator | Tuesday 02 September 2025 00:46:51 +0000 (0:00:02.177) 0:00:22.224 ***** 2025-09-02 00:48:59.938255 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-02 00:48:59.938265 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-02 00:48:59.938276 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-02 00:48:59.938287 | orchestrator | 2025-09-02 00:48:59.938298 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-02 00:48:59.938309 | orchestrator | Tuesday 02 September 2025 00:46:53 +0000 (0:00:01.760) 0:00:23.984 ***** 2025-09-02 00:48:59.938320 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-02 00:48:59.938331 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-02 00:48:59.938342 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-02 00:48:59.938353 | orchestrator | 2025-09-02 00:48:59.938364 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-09-02 00:48:59.938374 | orchestrator | Tuesday 02 September 2025 00:46:57 +0000 (0:00:03.976) 0:00:27.961 ***** 2025-09-02 00:48:59.938385 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 
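
The "Copying over ..." tasks in this rabbitmq play render Jinja2 templates (rabbitmq-env.conf.j2, rabbitmq.conf.j2, erl_inetrc.j2, advanced.config.j2, definitions.json.j2, enabled_plugins.j2) into the Kolla config directory on each node. Ansible's template module handles this internally; as a standalone sketch, the same rendering step could look like the following, where the destination path and the variable names/values are illustrative placeholders rather than what kolla-ansible actually passes in:

```python
# Standalone sketch of the template-rendering step performed by the
# "Copying over ..." tasks above. Ansible's template module does this
# internally; the destination and variables here are illustrative only.
from pathlib import Path
from jinja2 import Environment, FileSystemLoader


def render_template(template_dir: str, template_name: str, dest: str, **variables) -> None:
    """Render one Jinja2 template with the given variables and write it to dest."""
    env = Environment(loader=FileSystemLoader(template_dir), trim_blocks=True)
    rendered = env.get_template(template_name).render(**variables)
    Path(dest).write_text(rendered)


if __name__ == "__main__":
    # Placeholder invocation; real variables come from the kolla-ansible inventory.
    render_template(
        "/ansible/roles/rabbitmq/templates",
        "rabbitmq-env.conf.j2",
        "/etc/kolla/rabbitmq/rabbitmq-env.conf",
        rabbitmq_log_dir="/var/log/kolla/rabbitmq",
    )
```
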
2025-09-02 00:48:59.938396 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-02 00:48:59.938407 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-02 00:48:59.938418 | orchestrator | 2025-09-02 00:48:59.938429 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-02 00:48:59.938440 | orchestrator | Tuesday 02 September 2025 00:46:59 +0000 (0:00:02.265) 0:00:30.226 ***** 2025-09-02 00:48:59.938458 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-02 00:48:59.938470 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-02 00:48:59.938489 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-02 00:48:59.938500 | orchestrator | 2025-09-02 00:48:59.938510 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-02 00:48:59.938521 | orchestrator | Tuesday 02 September 2025 00:47:03 +0000 (0:00:03.604) 0:00:33.830 ***** 2025-09-02 00:48:59.938532 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:48:59.938543 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:48:59.938554 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:48:59.938565 | orchestrator | 2025-09-02 00:48:59.938576 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-02 00:48:59.938587 | orchestrator | Tuesday 02 September 2025 00:47:04 +0000 (0:00:01.393) 0:00:35.223 ***** 2025-09-02 00:48:59.938599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-02 00:48:59.938617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-02 00:48:59.938629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-02 00:48:59.938649 | orchestrator | 2025-09-02 00:48:59.938660 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-02 00:48:59.938671 | orchestrator | Tuesday 02 September 2025 00:47:06 +0000 (0:00:01.662) 0:00:36.886 ***** 2025-09-02 00:48:59.938682 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:48:59.938693 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:48:59.938709 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:48:59.938720 | orchestrator | 2025-09-02 00:48:59.938731 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-02 00:48:59.938742 | orchestrator | Tuesday 02 September 2025 00:47:07 +0000 (0:00:00.886) 0:00:37.773 ***** 2025-09-02 00:48:59.938753 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:48:59.938764 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:48:59.938775 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:48:59.938786 | orchestrator | 2025-09-02 00:48:59.938797 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-09-02 00:48:59.938808 | orchestrator | Tuesday 02 September 2025 00:47:15 +0000 (0:00:07.825) 0:00:45.598 ***** 2025-09-02 00:48:59.938819 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:48:59.938830 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:48:59.938841 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:48:59.938851 | orchestrator | 2025-09-02 00:48:59.938862 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-02 00:48:59.938873 | orchestrator | 2025-09-02 00:48:59.938884 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-02 00:48:59.938901 | orchestrator | Tuesday 02 September 2025 00:47:15 +0000 (0:00:00.755) 0:00:46.353 ***** 2025-09-02 00:48:59.938920 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:48:59.938939 | orchestrator | 2025-09-02 00:48:59.938958 | orchestrator | TASK 
[rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-02 00:48:59.938976 | orchestrator | Tuesday 02 September 2025 00:47:16 +0000 (0:00:00.668) 0:00:47.022 ***** 2025-09-02 00:48:59.938995 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:48:59.939012 | orchestrator | 2025-09-02 00:48:59.939030 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-02 00:48:59.939049 | orchestrator | Tuesday 02 September 2025 00:47:17 +0000 (0:00:00.624) 0:00:47.646 ***** 2025-09-02 00:48:59.939124 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:48:59.939145 | orchestrator | 2025-09-02 00:48:59.939162 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-02 00:48:59.939174 | orchestrator | Tuesday 02 September 2025 00:47:23 +0000 (0:00:06.753) 0:00:54.400 ***** 2025-09-02 00:48:59.939185 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:48:59.939195 | orchestrator | 2025-09-02 00:48:59.939206 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-02 00:48:59.939217 | orchestrator | 2025-09-02 00:48:59.939227 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-02 00:48:59.939238 | orchestrator | Tuesday 02 September 2025 00:48:14 +0000 (0:00:50.260) 0:01:44.661 ***** 2025-09-02 00:48:59.939249 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:48:59.939260 | orchestrator | 2025-09-02 00:48:59.939271 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-02 00:48:59.939282 | orchestrator | Tuesday 02 September 2025 00:48:14 +0000 (0:00:00.637) 0:01:45.298 ***** 2025-09-02 00:48:59.939293 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:48:59.939303 | orchestrator | 2025-09-02 00:48:59.939314 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-02 00:48:59.939324 | orchestrator | Tuesday 02 September 2025 00:48:15 +0000 (0:00:00.247) 0:01:45.546 ***** 2025-09-02 00:48:59.939333 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:48:59.939343 | orchestrator | 2025-09-02 00:48:59.939359 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-02 00:48:59.939369 | orchestrator | Tuesday 02 September 2025 00:48:16 +0000 (0:00:01.930) 0:01:47.477 ***** 2025-09-02 00:48:59.939379 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:48:59.939398 | orchestrator | 2025-09-02 00:48:59.939408 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-02 00:48:59.939417 | orchestrator | 2025-09-02 00:48:59.939427 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-02 00:48:59.939436 | orchestrator | Tuesday 02 September 2025 00:48:34 +0000 (0:00:18.060) 0:02:05.537 ***** 2025-09-02 00:48:59.939446 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:48:59.939456 | orchestrator | 2025-09-02 00:48:59.939465 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-02 00:48:59.939475 | orchestrator | Tuesday 02 September 2025 00:48:35 +0000 (0:00:00.601) 0:02:06.138 ***** 2025-09-02 00:48:59.939484 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:48:59.939494 | orchestrator | 2025-09-02 00:48:59.939504 | orchestrator | TASK [rabbitmq : Restart 
rabbitmq container] *********************************** 2025-09-02 00:48:59.939513 | orchestrator | Tuesday 02 September 2025 00:48:35 +0000 (0:00:00.285) 0:02:06.424 ***** 2025-09-02 00:48:59.939523 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:48:59.939532 | orchestrator | 2025-09-02 00:48:59.939542 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-02 00:48:59.939551 | orchestrator | Tuesday 02 September 2025 00:48:42 +0000 (0:00:06.697) 0:02:13.121 ***** 2025-09-02 00:48:59.939561 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:48:59.939571 | orchestrator | 2025-09-02 00:48:59.939580 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-02 00:48:59.939590 | orchestrator | 2025-09-02 00:48:59.939600 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-02 00:48:59.939609 | orchestrator | Tuesday 02 September 2025 00:48:55 +0000 (0:00:13.114) 0:02:26.236 ***** 2025-09-02 00:48:59.939619 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:48:59.939629 | orchestrator | 2025-09-02 00:48:59.939638 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-02 00:48:59.939648 | orchestrator | Tuesday 02 September 2025 00:48:56 +0000 (0:00:00.495) 0:02:26.731 ***** 2025-09-02 00:48:59.939658 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-02 00:48:59.939667 | orchestrator | enable_outward_rabbitmq_True 2025-09-02 00:48:59.939677 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-02 00:48:59.939686 | orchestrator | outward_rabbitmq_restart 2025-09-02 00:48:59.939696 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:48:59.939706 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:48:59.939715 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:48:59.939725 | orchestrator | 2025-09-02 00:48:59.939742 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-02 00:48:59.939752 | orchestrator | skipping: no hosts matched 2025-09-02 00:48:59.939762 | orchestrator | 2025-09-02 00:48:59.939772 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-09-02 00:48:59.939781 | orchestrator | skipping: no hosts matched 2025-09-02 00:48:59.939791 | orchestrator | 2025-09-02 00:48:59.939801 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-02 00:48:59.939810 | orchestrator | skipping: no hosts matched 2025-09-02 00:48:59.939820 | orchestrator | 2025-09-02 00:48:59.939830 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:48:59.939840 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-02 00:48:59.939851 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-02 00:48:59.939861 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:48:59.939871 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:48:59.939887 | orchestrator | 2025-09-02 00:48:59.939896 | orchestrator | 2025-09-02 00:48:59.939906 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-09-02 00:48:59.939916 | orchestrator | Tuesday 02 September 2025 00:48:58 +0000 (0:00:02.349) 0:02:29.081 ***** 2025-09-02 00:48:59.939925 | orchestrator | =============================================================================== 2025-09-02 00:48:59.939935 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 81.44s 2025-09-02 00:48:59.939945 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 15.38s 2025-09-02 00:48:59.939954 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.83s 2025-09-02 00:48:59.939964 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 4.45s 2025-09-02 00:48:59.939973 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.98s 2025-09-02 00:48:59.939983 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 3.60s 2025-09-02 00:48:59.939992 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.58s 2025-09-02 00:48:59.940002 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.35s 2025-09-02 00:48:59.940012 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.27s 2025-09-02 00:48:59.940021 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.18s 2025-09-02 00:48:59.940031 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.91s 2025-09-02 00:48:59.940045 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.76s 2025-09-02 00:48:59.940055 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.73s 2025-09-02 00:48:59.940064 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.66s 2025-09-02 00:48:59.940093 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.61s 2025-09-02 00:48:59.940103 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.39s 2025-09-02 00:48:59.940113 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.19s 2025-09-02 00:48:59.940122 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.19s 2025-09-02 00:48:59.940132 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.16s 2025-09-02 00:48:59.940142 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.14s 2025-09-02 00:49:02.973740 | orchestrator | 2025-09-02 00:49:02 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:49:02.975396 | orchestrator | 2025-09-02 00:49:02 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:49:02.977327 | orchestrator | 2025-09-02 00:49:02 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:49:02.977636 | orchestrator | 2025-09-02 00:49:02 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:49:06.035377 | orchestrator | 2025-09-02 00:49:06 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:49:06.036153 | orchestrator | 2025-09-02 00:49:06 | INFO  | Task 
ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:49:06.037826 | orchestrator | 2025-09-02 00:49:06 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:49:06.040447 | orchestrator | 2025-09-02 00:49:06 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:49:09.076833 | orchestrator | 2025-09-02 00:49:09 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:49:09.078265 | orchestrator | 2025-09-02 00:49:09 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:49:09.079945 | orchestrator | 2025-09-02 00:49:09 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:49:09.080032 | orchestrator | 2025-09-02 00:49:09 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:49:12.110986 | orchestrator | 2025-09-02 00:49:12 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:49:12.111907 | orchestrator | 2025-09-02 00:49:12 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:49:12.113459 | orchestrator | 2025-09-02 00:49:12 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:49:12.113482 | orchestrator | 2025-09-02 00:49:12 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:49:15.157369 | orchestrator | 2025-09-02 00:49:15 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:49:15.159874 | orchestrator | 2025-09-02 00:49:15 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:49:15.161382 | orchestrator | 2025-09-02 00:49:15 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:49:15.161428 | orchestrator | 2025-09-02 00:49:15 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:49:18.205758 | orchestrator | 2025-09-02 00:49:18 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:49:18.206902 | orchestrator | 2025-09-02 00:49:18 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:49:18.207740 | orchestrator | 2025-09-02 00:49:18 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:49:18.207763 | orchestrator | 2025-09-02 00:49:18 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:49:21.239742 | orchestrator | 2025-09-02 00:49:21 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:49:21.240160 | orchestrator | 2025-09-02 00:49:21 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:49:21.242460 | orchestrator | 2025-09-02 00:49:21 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:49:21.242487 | orchestrator | 2025-09-02 00:49:21 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:49:24.273653 | orchestrator | 2025-09-02 00:49:24 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:49:24.275991 | orchestrator | 2025-09-02 00:49:24 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:49:24.276867 | orchestrator | 2025-09-02 00:49:24 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:49:24.276898 | orchestrator | 2025-09-02 00:49:24 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:49:27.322240 | orchestrator | 2025-09-02 00:49:27 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state 
STARTED 2025-09-02 00:49:27.323867 | orchestrator | 2025-09-02 00:49:27 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:49:27.326375 | orchestrator | 2025-09-02 00:49:27 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:49:27.326426 | orchestrator | 2025-09-02 00:49:27 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:49:30.370721 | orchestrator | 2025-09-02 00:49:30 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:49:30.371390 | orchestrator | 2025-09-02 00:49:30 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:49:30.372680 | orchestrator | 2025-09-02 00:49:30 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:49:30.372733 | orchestrator | 2025-09-02 00:49:30 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:49:33.406967 | orchestrator | 2025-09-02 00:49:33 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:49:33.409247 | orchestrator | 2025-09-02 00:49:33 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:49:33.411574 | orchestrator | 2025-09-02 00:49:33 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:49:33.411603 | orchestrator | 2025-09-02 00:49:33 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:49:36.457270 | orchestrator | 2025-09-02 00:49:36 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:49:36.457829 | orchestrator | 2025-09-02 00:49:36 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:49:36.458237 | orchestrator | 2025-09-02 00:49:36 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:49:36.458954 | orchestrator | 2025-09-02 00:49:36 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:49:39.494190 | orchestrator | 2025-09-02 00:49:39 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:49:39.496902 | orchestrator | 2025-09-02 00:49:39 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:49:39.498130 | orchestrator | 2025-09-02 00:49:39 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:49:39.498153 | orchestrator | 2025-09-02 00:49:39 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:49:42.526902 | orchestrator | 2025-09-02 00:49:42 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:49:42.528768 | orchestrator | 2025-09-02 00:49:42 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:49:42.529288 | orchestrator | 2025-09-02 00:49:42 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:49:42.529383 | orchestrator | 2025-09-02 00:49:42 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:49:45.558563 | orchestrator | 2025-09-02 00:49:45 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:49:45.558830 | orchestrator | 2025-09-02 00:49:45 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:49:45.559707 | orchestrator | 2025-09-02 00:49:45 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:49:45.560250 | orchestrator | 2025-09-02 00:49:45 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:49:48.606103 | orchestrator 
| 2025-09-02 00:49:48 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state STARTED 2025-09-02 00:49:48.607057 | orchestrator | 2025-09-02 00:49:48 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:49:48.609662 | orchestrator | 2025-09-02 00:49:48 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:49:48.611943 | orchestrator | 2025-09-02 00:49:48 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:49:51.645007 | orchestrator | 2025-09-02 00:49:51 | INFO  | Task d0cc986c-9bfe-42fa-8dbb-e0f71f3c8100 is in state SUCCESS 2025-09-02 00:49:51.646873 | orchestrator | 2025-09-02 00:49:51.646947 | orchestrator | 2025-09-02 00:49:51.646961 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 00:49:51.647016 | orchestrator | 2025-09-02 00:49:51.647045 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 00:49:51.647084 | orchestrator | Tuesday 02 September 2025 00:47:24 +0000 (0:00:00.173) 0:00:00.173 ***** 2025-09-02 00:49:51.647097 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:49:51.647110 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:49:51.647182 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:49:51.647194 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:49:51.647205 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:49:51.647216 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:49:51.647227 | orchestrator | 2025-09-02 00:49:51.647238 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 00:49:51.647323 | orchestrator | Tuesday 02 September 2025 00:47:25 +0000 (0:00:00.743) 0:00:00.917 ***** 2025-09-02 00:49:51.647336 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-02 00:49:51.647347 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-02 00:49:51.647358 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-02 00:49:51.647369 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-02 00:49:51.647379 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-02 00:49:51.647390 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-02 00:49:51.647401 | orchestrator | 2025-09-02 00:49:51.647412 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-02 00:49:51.647423 | orchestrator | 2025-09-02 00:49:51.647433 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-02 00:49:51.647444 | orchestrator | Tuesday 02 September 2025 00:47:26 +0000 (0:00:00.877) 0:00:01.794 ***** 2025-09-02 00:49:51.647457 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:49:51.647469 | orchestrator | 2025-09-02 00:49:51.647483 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-02 00:49:51.647496 | orchestrator | Tuesday 02 September 2025 00:47:27 +0000 (0:00:01.228) 0:00:03.023 ***** 2025-09-02 00:49:51.647511 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647527 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647541 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647607 | orchestrator | 2025-09-02 00:49:51.647633 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-02 00:49:51.647653 | orchestrator | Tuesday 02 September 2025 00:47:29 +0000 (0:00:01.703) 0:00:04.726 ***** 2025-09-02 00:49:51.647667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647681 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647694 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647745 | orchestrator | 2025-09-02 00:49:51.647758 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-02 00:49:51.647771 | orchestrator | Tuesday 02 September 2025 00:47:31 +0000 (0:00:02.216) 0:00:06.942 ***** 2025-09-02 00:49:51.647784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647804 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647825 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647877 | orchestrator | 2025-09-02 00:49:51.647888 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-09-02 00:49:51.647899 | orchestrator | Tuesday 02 September 2025 00:47:32 +0000 (0:00:01.401) 0:00:08.344 ***** 2025-09-02 00:49:51.647911 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647922 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647933 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.647985 | orchestrator | 2025-09-02 00:49:51.648050 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-09-02 00:49:51.648067 | orchestrator | Tuesday 02 September 2025 00:47:35 +0000 (0:00:02.241) 0:00:10.585 ***** 2025-09-02 00:49:51.648079 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.648091 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.648102 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.648114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.648144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.648155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.648174 | orchestrator | 2025-09-02 00:49:51.648186 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-09-02 00:49:51.648197 | orchestrator | Tuesday 02 September 2025 00:47:36 +0000 (0:00:01.403) 0:00:11.988 ***** 2025-09-02 00:49:51.648208 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:49:51.648219 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:49:51.648230 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:49:51.648241 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:49:51.648252 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:49:51.648290 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:49:51.648302 | orchestrator | 2025-09-02 00:49:51.648313 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-09-02 00:49:51.648324 | orchestrator | Tuesday 02 September 2025 00:47:39 +0000 (0:00:02.798) 0:00:14.787 ***** 2025-09-02 00:49:51.648335 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-09-02 00:49:51.648346 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-09-02 00:49:51.648357 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-09-02 00:49:51.648368 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-09-02 00:49:51.648379 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-09-02 00:49:51.648390 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-09-02 00:49:51.648400 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-02 00:49:51.648411 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-02 00:49:51.648429 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-02 00:49:51.648440 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-02 00:49:51.648457 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-02 00:49:51.648468 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-02 00:49:51.648479 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-02 00:49:51.648491 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-02 00:49:51.648502 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-02 00:49:51.648513 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-02 00:49:51.648524 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-02 00:49:51.648535 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-02 00:49:51.648547 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-02 00:49:51.648558 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-02 00:49:51.648576 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-02 00:49:51.648587 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-02 00:49:51.648598 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-02 00:49:51.648609 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-02 00:49:51.648620 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-02 00:49:51.648631 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-02 00:49:51.648641 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-02 00:49:51.648652 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-02 00:49:51.648663 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-02 00:49:51.648674 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-02 00:49:51.648685 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-02 00:49:51.648696 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-02 00:49:51.648707 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-02 00:49:51.648718 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-02 00:49:51.648729 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-02 00:49:51.648740 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-02 00:49:51.648751 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-02 00:49:51.648762 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-02 00:49:51.648773 | orchestrator | ok: [testbed-node-5] => 
(item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-02 00:49:51.648784 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-02 00:49:51.648795 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-02 00:49:51.648806 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-09-02 00:49:51.648817 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-09-02 00:49:51.648828 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-09-02 00:49:51.648844 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-02 00:49:51.648861 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-09-02 00:49:51.648872 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-09-02 00:49:51.648883 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-02 00:49:51.648894 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-02 00:49:51.648912 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-02 00:49:51.648923 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-09-02 00:49:51.648934 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-02 00:49:51.648945 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-02 00:49:51.648956 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-02 00:49:51.648967 | orchestrator | 2025-09-02 00:49:51.648978 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-02 00:49:51.648990 | orchestrator | Tuesday 02 September 2025 00:47:59 +0000 (0:00:20.200) 0:00:34.987 ***** 2025-09-02 00:49:51.649001 | orchestrator | 2025-09-02 00:49:51.649012 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-02 00:49:51.649023 | orchestrator | Tuesday 02 September 2025 00:48:00 +0000 (0:00:00.556) 0:00:35.544 ***** 2025-09-02 00:49:51.649034 | orchestrator | 2025-09-02 00:49:51.649044 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-02 00:49:51.649055 | orchestrator | Tuesday 02 September 2025 00:48:00 +0000 (0:00:00.074) 0:00:35.618 ***** 2025-09-02 00:49:51.649066 | orchestrator | 2025-09-02 00:49:51.649077 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2025-09-02 00:49:51.649088 | orchestrator | Tuesday 02 September 2025 00:48:00 +0000 (0:00:00.085) 0:00:35.704 ***** 2025-09-02 00:49:51.649099 | orchestrator | 2025-09-02 00:49:51.649110 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-02 00:49:51.649136 | orchestrator | Tuesday 02 September 2025 00:48:00 +0000 (0:00:00.080) 0:00:35.784 ***** 2025-09-02 00:49:51.649147 | orchestrator | 2025-09-02 00:49:51.649158 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-02 00:49:51.649169 | orchestrator | Tuesday 02 September 2025 00:48:00 +0000 (0:00:00.082) 0:00:35.867 ***** 2025-09-02 00:49:51.649180 | orchestrator | 2025-09-02 00:49:51.649191 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-09-02 00:49:51.649202 | orchestrator | Tuesday 02 September 2025 00:48:00 +0000 (0:00:00.076) 0:00:35.944 ***** 2025-09-02 00:49:51.649213 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:49:51.649224 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:49:51.649235 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:49:51.649246 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:49:51.649256 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:49:51.649267 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:49:51.649278 | orchestrator | 2025-09-02 00:49:51.649289 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-09-02 00:49:51.649300 | orchestrator | Tuesday 02 September 2025 00:48:02 +0000 (0:00:01.968) 0:00:37.912 ***** 2025-09-02 00:49:51.649311 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:49:51.649322 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:49:51.649333 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:49:51.649344 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:49:51.649355 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:49:51.649365 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:49:51.649376 | orchestrator | 2025-09-02 00:49:51.649387 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-09-02 00:49:51.649398 | orchestrator | 2025-09-02 00:49:51.649409 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-02 00:49:51.649420 | orchestrator | Tuesday 02 September 2025 00:48:33 +0000 (0:00:30.926) 0:01:08.839 ***** 2025-09-02 00:49:51.649431 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:49:51.649448 | orchestrator | 2025-09-02 00:49:51.649459 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-02 00:49:51.649470 | orchestrator | Tuesday 02 September 2025 00:48:34 +0000 (0:00:00.769) 0:01:09.609 ***** 2025-09-02 00:49:51.649481 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:49:51.649492 | orchestrator | 2025-09-02 00:49:51.649503 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-09-02 00:49:51.649514 | orchestrator | Tuesday 02 September 2025 00:48:34 +0000 (0:00:00.583) 0:01:10.192 ***** 2025-09-02 00:49:51.649526 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:49:51.649536 | orchestrator | ok: 
[testbed-node-0] 2025-09-02 00:49:51.649547 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:49:51.649558 | orchestrator | 2025-09-02 00:49:51.649569 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-09-02 00:49:51.649581 | orchestrator | Tuesday 02 September 2025 00:48:35 +0000 (0:00:01.187) 0:01:11.380 ***** 2025-09-02 00:49:51.649591 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:49:51.649602 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:49:51.649614 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:49:51.649630 | orchestrator | 2025-09-02 00:49:51.649641 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-09-02 00:49:51.649653 | orchestrator | Tuesday 02 September 2025 00:48:36 +0000 (0:00:00.345) 0:01:11.725 ***** 2025-09-02 00:49:51.649664 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:49:51.649675 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:49:51.649686 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:49:51.649696 | orchestrator | 2025-09-02 00:49:51.649707 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-09-02 00:49:51.649718 | orchestrator | Tuesday 02 September 2025 00:48:36 +0000 (0:00:00.344) 0:01:12.069 ***** 2025-09-02 00:49:51.649729 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:49:51.649740 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:49:51.649751 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:49:51.649762 | orchestrator | 2025-09-02 00:49:51.649773 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-09-02 00:49:51.649784 | orchestrator | Tuesday 02 September 2025 00:48:37 +0000 (0:00:00.321) 0:01:12.391 ***** 2025-09-02 00:49:51.649795 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:49:51.649806 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:49:51.649817 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:49:51.649828 | orchestrator | 2025-09-02 00:49:51.649839 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-09-02 00:49:51.649850 | orchestrator | Tuesday 02 September 2025 00:48:37 +0000 (0:00:00.500) 0:01:12.891 ***** 2025-09-02 00:49:51.649861 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.649872 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.649883 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.649894 | orchestrator | 2025-09-02 00:49:51.649905 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-09-02 00:49:51.649916 | orchestrator | Tuesday 02 September 2025 00:48:37 +0000 (0:00:00.343) 0:01:13.234 ***** 2025-09-02 00:49:51.649927 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.649938 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.649949 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.649959 | orchestrator | 2025-09-02 00:49:51.649971 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-09-02 00:49:51.649982 | orchestrator | Tuesday 02 September 2025 00:48:38 +0000 (0:00:00.296) 0:01:13.530 ***** 2025-09-02 00:49:51.649993 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.650004 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.650015 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.650076 | orchestrator | 
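The "Configure OVN in OVSDB" task above writes the per-chassis OVN settings into the local Open_vSwitch table on every node before the ovn-db play starts probing for an existing cluster. A minimal manual equivalent, assuming ovs-vsctl is run where the local ovsdb socket is reachable (for example inside the openvswitch_vswitchd container) and using the values logged for testbed-node-0; the exact module invocation used by the role is not shown in this log:

  # Point ovn-controller at the SB cluster and set the encap options seen above.
  ovs-vsctl set open_vswitch . external_ids:ovn-encap-ip=192.168.16.10
  ovs-vsctl set open_vswitch . external_ids:ovn-encap-type=geneve
  ovs-vsctl set open_vswitch . external_ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"
  ovs-vsctl set open_vswitch . external_ids:ovn-remote-probe-interval=60000
  ovs-vsctl set open_vswitch . external_ids:ovn-openflow-probe-interval=60
  ovs-vsctl set open_vswitch . external_ids:ovn-monitor-all=false
  # Only the gateway chassis (testbed-node-0/1/2) keep the bridge mapping and CMS options.
  ovs-vsctl set open_vswitch . external_ids:ovn-bridge-mappings=physnet1:br-ex
  ovs-vsctl set open_vswitch . external_ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=nova"
  # Verify what ovn-controller will pick up.
  ovs-vsctl --columns=external_ids list open_vswitch .
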
2025-09-02 00:49:51.650088 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-09-02 00:49:51.650113 | orchestrator | Tuesday 02 September 2025 00:48:38 +0000 (0:00:00.315) 0:01:13.846 ***** 2025-09-02 00:49:51.650140 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.650162 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.650173 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.650185 | orchestrator | 2025-09-02 00:49:51.650196 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-09-02 00:49:51.650207 | orchestrator | Tuesday 02 September 2025 00:48:39 +0000 (0:00:00.603) 0:01:14.450 ***** 2025-09-02 00:49:51.650217 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.650229 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.650239 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.650250 | orchestrator | 2025-09-02 00:49:51.650261 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-09-02 00:49:51.650272 | orchestrator | Tuesday 02 September 2025 00:48:39 +0000 (0:00:00.307) 0:01:14.757 ***** 2025-09-02 00:49:51.650282 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.650293 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.650304 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.650315 | orchestrator | 2025-09-02 00:49:51.650326 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-09-02 00:49:51.650337 | orchestrator | Tuesday 02 September 2025 00:48:39 +0000 (0:00:00.358) 0:01:15.115 ***** 2025-09-02 00:49:51.650348 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.650359 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.650369 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.650380 | orchestrator | 2025-09-02 00:49:51.650391 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-09-02 00:49:51.650402 | orchestrator | Tuesday 02 September 2025 00:48:40 +0000 (0:00:00.365) 0:01:15.480 ***** 2025-09-02 00:49:51.650413 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.650424 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.650435 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.650445 | orchestrator | 2025-09-02 00:49:51.650456 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-09-02 00:49:51.650467 | orchestrator | Tuesday 02 September 2025 00:48:40 +0000 (0:00:00.304) 0:01:15.785 ***** 2025-09-02 00:49:51.650550 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.650572 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.650583 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.650594 | orchestrator | 2025-09-02 00:49:51.650605 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-09-02 00:49:51.650616 | orchestrator | Tuesday 02 September 2025 00:48:40 +0000 (0:00:00.493) 0:01:16.279 ***** 2025-09-02 00:49:51.650627 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.650638 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.650649 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.650659 | orchestrator | 2025-09-02 00:49:51.650670 | orchestrator | TASK [ovn-db : 
Divide hosts by their OVN SB leader/follower role] ************** 2025-09-02 00:49:51.650681 | orchestrator | Tuesday 02 September 2025 00:48:41 +0000 (0:00:00.311) 0:01:16.591 ***** 2025-09-02 00:49:51.650692 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.650702 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.650713 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.650724 | orchestrator | 2025-09-02 00:49:51.650735 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-02 00:49:51.650745 | orchestrator | Tuesday 02 September 2025 00:48:41 +0000 (0:00:00.278) 0:01:16.869 ***** 2025-09-02 00:49:51.650756 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.650767 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.650788 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.650799 | orchestrator | 2025-09-02 00:49:51.650811 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-02 00:49:51.650827 | orchestrator | Tuesday 02 September 2025 00:48:41 +0000 (0:00:00.283) 0:01:17.153 ***** 2025-09-02 00:49:51.650847 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:49:51.650858 | orchestrator | 2025-09-02 00:49:51.650870 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-09-02 00:49:51.650881 | orchestrator | Tuesday 02 September 2025 00:48:42 +0000 (0:00:00.805) 0:01:17.958 ***** 2025-09-02 00:49:51.650892 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:49:51.650903 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:49:51.650914 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:49:51.650925 | orchestrator | 2025-09-02 00:49:51.650936 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-02 00:49:51.650948 | orchestrator | Tuesday 02 September 2025 00:48:43 +0000 (0:00:00.610) 0:01:18.569 ***** 2025-09-02 00:49:51.650959 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:49:51.650970 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:49:51.650981 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:49:51.650991 | orchestrator | 2025-09-02 00:49:51.651002 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-09-02 00:49:51.651014 | orchestrator | Tuesday 02 September 2025 00:48:43 +0000 (0:00:00.468) 0:01:19.038 ***** 2025-09-02 00:49:51.651025 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.651036 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.651047 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.651058 | orchestrator | 2025-09-02 00:49:51.651069 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-09-02 00:49:51.651080 | orchestrator | Tuesday 02 September 2025 00:48:44 +0000 (0:00:00.526) 0:01:19.564 ***** 2025-09-02 00:49:51.651091 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.651102 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.651113 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.651150 | orchestrator | 2025-09-02 00:49:51.651162 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-02 00:49:51.651173 | orchestrator | Tuesday 02 September 2025 00:48:44 
+0000 (0:00:00.348) 0:01:19.913 ***** 2025-09-02 00:49:51.651184 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.651195 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.651206 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.651217 | orchestrator | 2025-09-02 00:49:51.651228 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-09-02 00:49:51.651239 | orchestrator | Tuesday 02 September 2025 00:48:44 +0000 (0:00:00.334) 0:01:20.247 ***** 2025-09-02 00:49:51.651250 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.651261 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.651300 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.651312 | orchestrator | 2025-09-02 00:49:51.651323 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-09-02 00:49:51.651334 | orchestrator | Tuesday 02 September 2025 00:48:45 +0000 (0:00:00.354) 0:01:20.602 ***** 2025-09-02 00:49:51.651345 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.651356 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.651367 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.651378 | orchestrator | 2025-09-02 00:49:51.651389 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-09-02 00:49:51.651400 | orchestrator | Tuesday 02 September 2025 00:48:45 +0000 (0:00:00.535) 0:01:21.137 ***** 2025-09-02 00:49:51.651411 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.651421 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.651432 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.651443 | orchestrator | 2025-09-02 00:49:51.651454 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-02 00:49:51.651465 | orchestrator | Tuesday 02 September 2025 00:48:46 +0000 (0:00:00.333) 0:01:21.471 ***** 2025-09-02 00:49:51.651477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651618 | orchestrator | 2025-09-02 00:49:51.651630 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-02 00:49:51.651641 | orchestrator | Tuesday 02 September 2025 00:48:47 +0000 (0:00:01.519) 0:01:22.991 ***** 2025-09-02 00:49:51.651653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 
'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651776 | orchestrator | 2025-09-02 00:49:51.651788 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-02 00:49:51.651799 | orchestrator | Tuesday 02 September 2025 00:48:51 +0000 (0:00:04.186) 0:01:27.177 ***** 2025-09-02 00:49:51.651810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-02 00:49:51.651828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.651934 | orchestrator | 2025-09-02 00:49:51.651945 | orchestrator | TASK [ovn-db : 
Flush handlers] ************************************************* 2025-09-02 00:49:51.651956 | orchestrator | Tuesday 02 September 2025 00:48:53 +0000 (0:00:02.081) 0:01:29.259 ***** 2025-09-02 00:49:51.651968 | orchestrator | 2025-09-02 00:49:51.651979 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-02 00:49:51.651990 | orchestrator | Tuesday 02 September 2025 00:48:54 +0000 (0:00:00.277) 0:01:29.536 ***** 2025-09-02 00:49:51.652007 | orchestrator | 2025-09-02 00:49:51.652018 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-02 00:49:51.652029 | orchestrator | Tuesday 02 September 2025 00:48:54 +0000 (0:00:00.064) 0:01:29.601 ***** 2025-09-02 00:49:51.652040 | orchestrator | 2025-09-02 00:49:51.652051 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-02 00:49:51.652061 | orchestrator | Tuesday 02 September 2025 00:48:54 +0000 (0:00:00.065) 0:01:29.666 ***** 2025-09-02 00:49:51.652072 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:49:51.652083 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:49:51.652094 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:49:51.652105 | orchestrator | 2025-09-02 00:49:51.652132 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-02 00:49:51.652143 | orchestrator | Tuesday 02 September 2025 00:49:01 +0000 (0:00:07.550) 0:01:37.217 ***** 2025-09-02 00:49:51.652154 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:49:51.652165 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:49:51.652176 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:49:51.652187 | orchestrator | 2025-09-02 00:49:51.652198 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-02 00:49:51.652209 | orchestrator | Tuesday 02 September 2025 00:49:08 +0000 (0:00:06.600) 0:01:43.817 ***** 2025-09-02 00:49:51.652220 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:49:51.652231 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:49:51.652242 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:49:51.652253 | orchestrator | 2025-09-02 00:49:51.652264 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-02 00:49:51.652275 | orchestrator | Tuesday 02 September 2025 00:49:16 +0000 (0:00:07.678) 0:01:51.496 ***** 2025-09-02 00:49:51.652286 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.652297 | orchestrator | 2025-09-02 00:49:51.652307 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-02 00:49:51.652319 | orchestrator | Tuesday 02 September 2025 00:49:16 +0000 (0:00:00.140) 0:01:51.637 ***** 2025-09-02 00:49:51.652330 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:49:51.652341 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:49:51.652352 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:49:51.652362 | orchestrator | 2025-09-02 00:49:51.652373 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-02 00:49:51.652384 | orchestrator | Tuesday 02 September 2025 00:49:17 +0000 (0:00:01.115) 0:01:52.753 ***** 2025-09-02 00:49:51.652395 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.652406 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.652417 | orchestrator | 
changed: [testbed-node-0] 2025-09-02 00:49:51.652427 | orchestrator | 2025-09-02 00:49:51.652439 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-02 00:49:51.652450 | orchestrator | Tuesday 02 September 2025 00:49:18 +0000 (0:00:00.680) 0:01:53.433 ***** 2025-09-02 00:49:51.652461 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:49:51.652471 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:49:51.652482 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:49:51.652493 | orchestrator | 2025-09-02 00:49:51.652504 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-02 00:49:51.652515 | orchestrator | Tuesday 02 September 2025 00:49:18 +0000 (0:00:00.798) 0:01:54.232 ***** 2025-09-02 00:49:51.652526 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.652537 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.652548 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:49:51.652558 | orchestrator | 2025-09-02 00:49:51.652569 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-02 00:49:51.652580 | orchestrator | Tuesday 02 September 2025 00:49:19 +0000 (0:00:00.673) 0:01:54.905 ***** 2025-09-02 00:49:51.652591 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:49:51.652602 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:49:51.652629 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:49:51.652641 | orchestrator | 2025-09-02 00:49:51.652652 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-02 00:49:51.652668 | orchestrator | Tuesday 02 September 2025 00:49:20 +0000 (0:00:01.224) 0:01:56.130 ***** 2025-09-02 00:49:51.652679 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:49:51.652690 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:49:51.652701 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:49:51.652712 | orchestrator | 2025-09-02 00:49:51.652723 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-09-02 00:49:51.652734 | orchestrator | Tuesday 02 September 2025 00:49:21 +0000 (0:00:01.016) 0:01:57.147 ***** 2025-09-02 00:49:51.652745 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:49:51.652756 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:49:51.652767 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:49:51.652777 | orchestrator | 2025-09-02 00:49:51.652788 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-02 00:49:51.652799 | orchestrator | Tuesday 02 September 2025 00:49:22 +0000 (0:00:00.513) 0:01:57.660 ***** 2025-09-02 00:49:51.652810 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.652822 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.652833 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.652845 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.652856 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.652868 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.652879 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.652890 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.652915 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.652927 | orchestrator | 2025-09-02 00:49:51.652944 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-02 00:49:51.652956 | orchestrator | Tuesday 02 September 2025 00:49:23 +0000 (0:00:01.565) 0:01:59.226 ***** 2025-09-02 00:49:51.652967 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.652978 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.652990 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.653001 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.653012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.653024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.653035 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.653046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.653071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-09-02 00:49:51.653082 | orchestrator | 2025-09-02 00:49:51.653094 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-02 00:49:51.653105 | orchestrator | Tuesday 02 September 2025 00:49:28 +0000 (0:00:04.314) 0:02:03.541 ***** 2025-09-02 00:49:51.653175 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.653189 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.653201 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.653212 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.653224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.653235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.653247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.653258 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.653277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 00:49:51.653288 | orchestrator | 2025-09-02 00:49:51.653299 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-02 00:49:51.653310 | orchestrator | Tuesday 02 September 2025 00:49:31 +0000 (0:00:02.908) 0:02:06.449 ***** 2025-09-02 00:49:51.653321 | orchestrator | 2025-09-02 00:49:51.653333 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-02 00:49:51.653344 | orchestrator | Tuesday 02 September 2025 00:49:31 +0000 (0:00:00.068) 0:02:06.517 ***** 2025-09-02 00:49:51.653355 | orchestrator | 2025-09-02 00:49:51.653365 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-02 00:49:51.653374 | orchestrator | Tuesday 02 September 2025 00:49:31 +0000 (0:00:00.072) 0:02:06.589 ***** 2025-09-02 00:49:51.653384 | orchestrator | 2025-09-02 00:49:51.653394 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-02 00:49:51.653404 | orchestrator | Tuesday 02 September 2025 00:49:31 +0000 (0:00:00.070) 0:02:06.660 ***** 2025-09-02 00:49:51.653413 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:49:51.653423 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:49:51.653433 | orchestrator | 2025-09-02 00:49:51.653449 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-02 00:49:51.653459 | orchestrator | Tuesday 02 September 2025 00:49:37 +0000 (0:00:06.187) 0:02:12.848 ***** 2025-09-02 00:49:51.653474 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:49:51.653484 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:49:51.653494 | orchestrator | 2025-09-02 00:49:51.653503 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-02 00:49:51.653513 | orchestrator | Tuesday 02 September 2025 00:49:43 +0000 (0:00:06.329) 0:02:19.177 ***** 2025-09-02 00:49:51.653523 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:49:51.653532 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:49:51.653542 | orchestrator | 2025-09-02 00:49:51.653552 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-02 00:49:51.653562 | orchestrator | Tuesday 02 September 2025 00:49:45 +0000 (0:00:01.567) 0:02:20.745 ***** 2025-09-02 00:49:51.653572 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:49:51.653581 | orchestrator | 2025-09-02 00:49:51.653591 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-02 00:49:51.653601 | orchestrator | Tuesday 02 September 2025 00:49:45 +0000 (0:00:00.139) 0:02:20.884 ***** 2025-09-02 00:49:51.653610 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:49:51.653620 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:49:51.653630 | orchestrator | ok: [testbed-node-2] 2025-09-02 
00:49:51.653639 | orchestrator | 2025-09-02 00:49:51.653649 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-02 00:49:51.653659 | orchestrator | Tuesday 02 September 2025 00:49:46 +0000 (0:00:00.803) 0:02:21.688 ***** 2025-09-02 00:49:51.653669 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.653679 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.653688 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:49:51.653698 | orchestrator | 2025-09-02 00:49:51.653708 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-02 00:49:51.653717 | orchestrator | Tuesday 02 September 2025 00:49:46 +0000 (0:00:00.617) 0:02:22.305 ***** 2025-09-02 00:49:51.653727 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:49:51.653743 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:49:51.653753 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:49:51.653762 | orchestrator | 2025-09-02 00:49:51.653772 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-02 00:49:51.653782 | orchestrator | Tuesday 02 September 2025 00:49:47 +0000 (0:00:00.836) 0:02:23.142 ***** 2025-09-02 00:49:51.653792 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:49:51.653802 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:49:51.653811 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:49:51.653821 | orchestrator | 2025-09-02 00:49:51.653831 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-02 00:49:51.653841 | orchestrator | Tuesday 02 September 2025 00:49:48 +0000 (0:00:00.863) 0:02:24.006 ***** 2025-09-02 00:49:51.653850 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:49:51.653860 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:49:51.653870 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:49:51.653879 | orchestrator | 2025-09-02 00:49:51.653889 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-02 00:49:51.653899 | orchestrator | Tuesday 02 September 2025 00:49:49 +0000 (0:00:00.802) 0:02:24.808 ***** 2025-09-02 00:49:51.653909 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:49:51.653918 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:49:51.653928 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:49:51.653938 | orchestrator | 2025-09-02 00:49:51.653947 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:49:51.653957 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-02 00:49:51.653968 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-02 00:49:51.653978 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-02 00:49:51.653988 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:49:51.653998 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:49:51.654007 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:49:51.654065 | orchestrator | 2025-09-02 00:49:51.654078 | orchestrator | 2025-09-02 00:49:51.654088 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-02 00:49:51.654097 | orchestrator | Tuesday 02 September 2025 00:49:50 +0000 (0:00:00.947) 0:02:25.756 ***** 2025-09-02 00:49:51.654108 | orchestrator | =============================================================================== 2025-09-02 00:49:51.654131 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 30.93s 2025-09-02 00:49:51.654141 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.20s 2025-09-02 00:49:51.654151 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.74s 2025-09-02 00:49:51.654161 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 12.93s 2025-09-02 00:49:51.654170 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.25s 2025-09-02 00:49:51.654180 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.31s 2025-09-02 00:49:51.654190 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.19s 2025-09-02 00:49:51.654206 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.91s 2025-09-02 00:49:51.654217 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.80s 2025-09-02 00:49:51.654232 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.24s 2025-09-02 00:49:51.654248 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.22s 2025-09-02 00:49:51.654258 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.08s 2025-09-02 00:49:51.654268 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.97s 2025-09-02 00:49:51.654278 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.70s 2025-09-02 00:49:51.654287 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.57s 2025-09-02 00:49:51.654297 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.52s 2025-09-02 00:49:51.654307 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.40s 2025-09-02 00:49:51.654316 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.40s 2025-09-02 00:49:51.654326 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.23s 2025-09-02 00:49:51.654335 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.22s 2025-09-02 00:49:51.654345 | orchestrator | 2025-09-02 00:49:51 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:49:51.654355 | orchestrator | 2025-09-02 00:49:51 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:49:51.654365 | orchestrator | 2025-09-02 00:49:51 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:49:54.699525 | orchestrator | 2025-09-02 00:49:54 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:49:54.701720 | orchestrator | 2025-09-02 00:49:54 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:49:54.701749 | orchestrator | 2025-09-02 00:49:54 | INFO  | Wait 1 second(s) until the next 
check 2025-09-02 00:51:20.016875 | orchestrator | 2025-09-02 00:51:20 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:51:20.018838 | orchestrator | 2025-09-02 00:51:20 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED
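The repeated STARTED/Wait messages around this point come from a simple polling pattern: the deploy wrapper repeatedly asks the task backend for the state of the two task IDs and sleeps between checks until each task reports SUCCESS or FAILURE. A minimal Python sketch of that pattern follows; get_state is a hypothetical caller-supplied lookup standing in for whatever the osism client actually queries (a Celery-style result backend), and the timestamp formatting only mimics the log lines.

```python
# Minimal sketch of a task-polling loop that produces
# "Task <id> is in state ..." / "Wait 1 second(s) ..." lines.
# get_state is a hypothetical stand-in for the real state lookup;
# only the pattern is shown, not the actual OSISM implementation.
import time
from datetime import datetime, timezone


def log(message: str) -> None:
    # Mimic the "<timestamp> | INFO  | <message>" formatting of the job log.
    print(f"{datetime.now(timezone.utc):%Y-%m-%d %H:%M:%S} | INFO  | {message}")


def wait_for_tasks(task_ids, get_state, delay=1):
    """Poll every task until none is left in a pending/STARTED state."""
    pending = list(task_ids)
    while pending:
        for task_id in list(pending):
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.remove(task_id)
        if pending:
            log(f"Wait {delay} second(s) until the next check")
            time.sleep(delay)


if __name__ == "__main__":
    # Fake backend for the sketch: each task reports STARTED twice, then SUCCESS.
    calls = {}

    def fake_state(task_id):
        calls[task_id] = calls.get(task_id, 0) + 1
        return "SUCCESS" if calls[task_id] > 2 else "STARTED"

    wait_for_tasks(
        ["ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff",
         "b40ad5d9-1700-4e38-b997-7ab7b0c4f741"],
        get_state=fake_state,
    )
```

In the run above, both tasks are polled together roughly every three seconds until b40ad5d9-1700-4e38-b997-7ab7b0c4f741 leaves STARTED for SUCCESS at 00:52:42; the Ansible play output that follows carries earlier timestamps (00:46:04 onward) because it is only written to the console once that task finishes.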
2025-09-02 00:51:20.018898 | orchestrator | 2025-09-02 00:51:20 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:52:17.886888 | orchestrator | 2025-09-02 00:52:17 | INFO  | Task
ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:52:17.888512 | orchestrator | 2025-09-02 00:52:17 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:52:17.888722 | orchestrator | 2025-09-02 00:52:17 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:52:20.924510 | orchestrator | 2025-09-02 00:52:20 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:52:20.925647 | orchestrator | 2025-09-02 00:52:20 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:52:20.926188 | orchestrator | 2025-09-02 00:52:20 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:52:23.972358 | orchestrator | 2025-09-02 00:52:23 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:52:23.974481 | orchestrator | 2025-09-02 00:52:23 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:52:23.974558 | orchestrator | 2025-09-02 00:52:23 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:52:27.029706 | orchestrator | 2025-09-02 00:52:27 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:52:27.030101 | orchestrator | 2025-09-02 00:52:27 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:52:27.030636 | orchestrator | 2025-09-02 00:52:27 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:52:30.068427 | orchestrator | 2025-09-02 00:52:30 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:52:30.070276 | orchestrator | 2025-09-02 00:52:30 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:52:30.070825 | orchestrator | 2025-09-02 00:52:30 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:52:33.115917 | orchestrator | 2025-09-02 00:52:33 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:52:33.118233 | orchestrator | 2025-09-02 00:52:33 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:52:33.118262 | orchestrator | 2025-09-02 00:52:33 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:52:36.175488 | orchestrator | 2025-09-02 00:52:36 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:52:36.176940 | orchestrator | 2025-09-02 00:52:36 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:52:36.177277 | orchestrator | 2025-09-02 00:52:36 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:52:39.233957 | orchestrator | 2025-09-02 00:52:39 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:52:39.235216 | orchestrator | 2025-09-02 00:52:39 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state STARTED 2025-09-02 00:52:39.235248 | orchestrator | 2025-09-02 00:52:39 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:52:42.280879 | orchestrator | 2025-09-02 00:52:42 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:52:42.293041 | orchestrator | 2025-09-02 00:52:42 | INFO  | Task b40ad5d9-1700-4e38-b997-7ab7b0c4f741 is in state SUCCESS 2025-09-02 00:52:42.295362 | orchestrator | 2025-09-02 00:52:42.295442 | orchestrator | 2025-09-02 00:52:42.295498 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 00:52:42.295514 | orchestrator | 2025-09-02 00:52:42.295592 | 
orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 00:52:42.295607 | orchestrator | Tuesday 02 September 2025 00:46:04 +0000 (0:00:00.539) 0:00:00.539 ***** 2025-09-02 00:52:42.295618 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:52:42.295631 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:52:42.295642 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:52:42.295654 | orchestrator | 2025-09-02 00:52:42.295665 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 00:52:42.295677 | orchestrator | Tuesday 02 September 2025 00:46:05 +0000 (0:00:00.474) 0:00:01.013 ***** 2025-09-02 00:52:42.295710 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-02 00:52:42.295723 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-09-02 00:52:42.295734 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-09-02 00:52:42.295745 | orchestrator | 2025-09-02 00:52:42.295756 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-09-02 00:52:42.295767 | orchestrator | 2025-09-02 00:52:42.295778 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-02 00:52:42.295789 | orchestrator | Tuesday 02 September 2025 00:46:05 +0000 (0:00:00.520) 0:00:01.534 ***** 2025-09-02 00:52:42.295800 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.295813 | orchestrator | 2025-09-02 00:52:42.295824 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-09-02 00:52:42.295835 | orchestrator | Tuesday 02 September 2025 00:46:06 +0000 (0:00:00.577) 0:00:02.111 ***** 2025-09-02 00:52:42.295846 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:52:42.295858 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:52:42.295869 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:52:42.295880 | orchestrator | 2025-09-02 00:52:42.295891 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-02 00:52:42.295902 | orchestrator | Tuesday 02 September 2025 00:46:07 +0000 (0:00:00.825) 0:00:02.936 ***** 2025-09-02 00:52:42.295913 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.295924 | orchestrator | 2025-09-02 00:52:42.295938 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-09-02 00:52:42.295952 | orchestrator | Tuesday 02 September 2025 00:46:08 +0000 (0:00:00.991) 0:00:03.928 ***** 2025-09-02 00:52:42.295964 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:52:42.295976 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:52:42.295989 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:52:42.296001 | orchestrator | 2025-09-02 00:52:42.296060 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-09-02 00:52:42.296074 | orchestrator | Tuesday 02 September 2025 00:46:08 +0000 (0:00:00.704) 0:00:04.632 ***** 2025-09-02 00:52:42.296086 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-02 00:52:42.296099 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-02 00:52:42.296197 | orchestrator | 
changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-02 00:52:42.296242 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-02 00:52:42.296256 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-02 00:52:42.296269 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-02 00:52:42.296282 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-02 00:52:42.296296 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-02 00:52:42.296328 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-02 00:52:42.296340 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-02 00:52:42.296351 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-02 00:52:42.296362 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-02 00:52:42.296373 | orchestrator | 2025-09-02 00:52:42.296384 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-02 00:52:42.296395 | orchestrator | Tuesday 02 September 2025 00:46:12 +0000 (0:00:04.066) 0:00:08.699 ***** 2025-09-02 00:52:42.296406 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-02 00:52:42.296417 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-02 00:52:42.296429 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-02 00:52:42.296440 | orchestrator | 2025-09-02 00:52:42.296451 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-02 00:52:42.296462 | orchestrator | Tuesday 02 September 2025 00:46:14 +0000 (0:00:01.330) 0:00:10.029 ***** 2025-09-02 00:52:42.296473 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-02 00:52:42.296511 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-02 00:52:42.296566 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-02 00:52:42.296579 | orchestrator | 2025-09-02 00:52:42.296590 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-02 00:52:42.296601 | orchestrator | Tuesday 02 September 2025 00:46:16 +0000 (0:00:01.855) 0:00:11.886 ***** 2025-09-02 00:52:42.296612 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-09-02 00:52:42.296624 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.296686 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-09-02 00:52:42.296698 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.296709 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-09-02 00:52:42.296720 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.296731 | orchestrator | 2025-09-02 00:52:42.296743 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-02 00:52:42.296754 | orchestrator | Tuesday 02 September 2025 00:46:17 +0000 (0:00:00.919) 0:00:12.806 ***** 2025-09-02 00:52:42.296805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-02 00:52:42.296828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-02 00:52:42.296850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-02 00:52:42.296863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-02 00:52:42.296875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-02 00:52:42.296894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-02 00:52:42.296906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-02 00:52:42.296919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-02 00:52:42.296931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-02 00:52:42.296950 | orchestrator | 2025-09-02 00:52:42.296961 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-02 00:52:42.296972 | orchestrator | Tuesday 02 September 2025 00:46:20 +0000 (0:00:03.000) 0:00:15.806 ***** 2025-09-02 00:52:42.296983 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.296994 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.297005 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.297016 | orchestrator | 2025-09-02 00:52:42.297027 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-02 00:52:42.297038 | orchestrator | Tuesday 02 September 2025 00:46:22 +0000 (0:00:02.188) 0:00:17.995 ***** 2025-09-02 00:52:42.297050 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-02 00:52:42.297060 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-02 00:52:42.297071 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-02 00:52:42.297083 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-02 00:52:42.297093 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-02 00:52:42.297104 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-02 00:52:42.297115 | orchestrator | 2025-09-02 00:52:42.297126 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-02 00:52:42.297137 | orchestrator | Tuesday 02 September 2025 00:46:26 +0000 (0:00:04.082) 0:00:22.078 ***** 2025-09-02 00:52:42.297148 | orchestrator 
| changed: [testbed-node-0] 2025-09-02 00:52:42.297159 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.297170 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.297182 | orchestrator | 2025-09-02 00:52:42.297193 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-02 00:52:42.297204 | orchestrator | Tuesday 02 September 2025 00:46:27 +0000 (0:00:01.383) 0:00:23.461 ***** 2025-09-02 00:52:42.297215 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:52:42.297226 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:52:42.297272 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:52:42.297284 | orchestrator | 2025-09-02 00:52:42.297295 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-02 00:52:42.297335 | orchestrator | Tuesday 02 September 2025 00:46:30 +0000 (0:00:02.342) 0:00:25.804 ***** 2025-09-02 00:52:42.297544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.297583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.297610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.297624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d4a7165903124ef1b46737a099ee6b32bf7d0474', '__omit_place_holder__d4a7165903124ef1b46737a099ee6b32bf7d0474'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-02 
00:52:42.297636 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.297647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.297659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.297671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.297682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d4a7165903124ef1b46737a099ee6b32bf7d0474', '__omit_place_holder__d4a7165903124ef1b46737a099ee6b32bf7d0474'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-02 00:52:42.297693 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.297713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.297741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.297753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.297765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d4a7165903124ef1b46737a099ee6b32bf7d0474', '__omit_place_holder__d4a7165903124ef1b46737a099ee6b32bf7d0474'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-02 00:52:42.297776 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.297787 | orchestrator | 2025-09-02 00:52:42.297827 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-02 00:52:42.297927 | orchestrator | Tuesday 02 September 2025 00:46:31 +0000 (0:00:01.474) 0:00:27.279 ***** 2025-09-02 00:52:42.297941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-02 00:52:42.297953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-02 00:52:42.297983 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-02 00:52:42.298197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-02 00:52:42.298212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.298224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d4a7165903124ef1b46737a099ee6b32bf7d0474', '__omit_place_holder__d4a7165903124ef1b46737a099ee6b32bf7d0474'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-02 00:52:42.298236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-02 00:52:42.298247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.298259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d4a7165903124ef1b46737a099ee6b32bf7d0474', '__omit_place_holder__d4a7165903124ef1b46737a099ee6b32bf7d0474'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-02 00:52:42.298386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-02 00:52:42.298410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.298423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d4a7165903124ef1b46737a099ee6b32bf7d0474', '__omit_place_holder__d4a7165903124ef1b46737a099ee6b32bf7d0474'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-02 00:52:42.298435 | orchestrator | 2025-09-02 00:52:42.298446 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-02 00:52:42.298506 | orchestrator | Tuesday 02 September 2025 00:46:35 +0000 (0:00:04.110) 0:00:31.390 ***** 2025-09-02 00:52:42.298521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-02 00:52:42.298533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-02 00:52:42.298545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-02 00:52:42.298575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-02 00:52:42.298592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-02 00:52:42.298604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-02 00:52:42.298616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-02 00:52:42.298628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-02 00:52:42.298639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-02 00:52:42.298657 | orchestrator | 2025-09-02 00:52:42.298669 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-02 00:52:42.298680 | orchestrator | Tuesday 02 September 2025 00:46:39 +0000 (0:00:03.516) 0:00:34.906 ***** 2025-09-02 00:52:42.298691 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-02 00:52:42.298759 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-02 00:52:42.298773 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-02 00:52:42.298784 | orchestrator | 2025-09-02 00:52:42.298795 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-02 00:52:42.298806 | orchestrator | Tuesday 02 September 2025 00:46:41 +0000 (0:00:02.751) 0:00:37.658 ***** 2025-09-02 00:52:42.298817 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-02 00:52:42.298828 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-02 00:52:42.298839 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-02 00:52:42.298850 | orchestrator | 2025-09-02 00:52:42.298876 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-02 00:52:42.298887 | orchestrator | Tuesday 02 September 2025 00:46:48 +0000 (0:00:06.985) 0:00:44.644 ***** 2025-09-02 00:52:42.298898 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.298909 | orchestrator | skipping: [testbed-node-1] 
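The per-item dictionaries echoed above are the loadbalancer service definitions that kolla-ansible iterates over: each entry names the container, its image, the bind mounts, and an optional Docker-style healthcheck. The sketch below shows the general shape of such a definition in YAML; the variable name and values are condensed, illustrative assumptions rather than the literal kolla-ansible role defaults.

    # Illustrative sketch only: the shape of a loadbalancer service entry as
    # looped over by the tasks above, not the literal kolla-ansible defaults.
    loadbalancer_services:
      haproxy:
        container_name: haproxy
        group: loadbalancer
        enabled: true
        image: "registry.osism.tech/kolla/haproxy:2024.2"
        privileged: true
        volumes:
          - "/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro"
          - "haproxy_socket:/var/lib/kolla/haproxy/"
          - "letsencrypt_certificates:/etc/haproxy/certificates"
        healthcheck:
          interval: "30"
          retries: "3"
          start_period: "5"
          test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"]
          timeout: "30"

The results of the "Copying checks for services which are enabled" task above are consistent with this structure: haproxy and proxysql (enabled, with a healthcheck defined) report "changed", while keepalived (no healthcheck key) and haproxy-ssh (enabled: False) are skipped on every node.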
2025-09-02 00:52:42.298920 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.298931 | orchestrator | 2025-09-02 00:52:42.299053 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-02 00:52:42.299065 | orchestrator | Tuesday 02 September 2025 00:46:49 +0000 (0:00:00.856) 0:00:45.500 ***** 2025-09-02 00:52:42.299082 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-02 00:52:42.299094 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-02 00:52:42.299105 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-02 00:52:42.299116 | orchestrator | 2025-09-02 00:52:42.299127 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-09-02 00:52:42.299138 | orchestrator | Tuesday 02 September 2025 00:46:52 +0000 (0:00:02.576) 0:00:48.077 ***** 2025-09-02 00:52:42.299149 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-02 00:52:42.299159 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-02 00:52:42.299170 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-02 00:52:42.299181 | orchestrator | 2025-09-02 00:52:42.299192 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-09-02 00:52:42.299203 | orchestrator | Tuesday 02 September 2025 00:46:56 +0000 (0:00:03.742) 0:00:51.820 ***** 2025-09-02 00:52:42.299215 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-09-02 00:52:42.299226 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-09-02 00:52:42.299333 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-09-02 00:52:42.299348 | orchestrator | 2025-09-02 00:52:42.299359 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-09-02 00:52:42.299370 | orchestrator | Tuesday 02 September 2025 00:46:58 +0000 (0:00:02.065) 0:00:53.885 ***** 2025-09-02 00:52:42.299421 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-09-02 00:52:42.299442 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-09-02 00:52:42.299453 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-09-02 00:52:42.299495 | orchestrator | 2025-09-02 00:52:42.299506 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-02 00:52:42.299517 | orchestrator | Tuesday 02 September 2025 00:47:01 +0000 (0:00:02.859) 0:00:56.745 ***** 2025-09-02 00:52:42.299528 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.299539 | orchestrator | 2025-09-02 00:52:42.299550 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-09-02 00:52:42.299561 | orchestrator | Tuesday 02 September 2025 00:47:01 +0000 (0:00:00.759) 0:00:57.504 ***** 2025-09-02 00:52:42.299573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-02 00:52:42.299584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-02 00:52:42.299605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-02 00:52:42.299627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-02 00:52:42.299639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-02 00:52:42.299658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-02 00:52:42.299669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-02 00:52:42.299681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-02 00:52:42.299692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-02 00:52:42.299704 | orchestrator | 2025-09-02 00:52:42.299715 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-02 00:52:42.299726 | orchestrator | Tuesday 02 September 2025 00:47:07 +0000 (0:00:05.586) 0:01:03.091 ***** 2025-09-02 00:52:42.299748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.299766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.299777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.299795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.299820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.299832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.299844 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.299894 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.300019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.300047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.300060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.300079 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.300090 | orchestrator | 2025-09-02 00:52:42.300101 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-02 00:52:42.300112 | orchestrator | Tuesday 02 September 2025 00:47:08 +0000 (0:00:01.065) 0:01:04.156 ***** 2025-09-02 00:52:42.300124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.300135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.300147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.300158 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.300169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.300188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.300204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.300222 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.300234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.300246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.300257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.300268 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.300279 | orchestrator | 2025-09-02 00:52:42.300290 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-02 00:52:42.300348 | orchestrator | Tuesday 02 September 2025 00:47:09 +0000 (0:00:00.831) 0:01:04.988 ***** 2025-09-02 00:52:42.300406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.300427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.300439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.300464 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.300475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.300487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.300499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.300510 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.300521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.300532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.300628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.300641 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.300660 | orchestrator | 2025-09-02 00:52:42.300671 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-02 00:52:42.300682 | orchestrator | Tuesday 02 September 2025 00:47:10 +0000 (0:00:00.915) 0:01:05.903 ***** 2025-09-02 00:52:42.300699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.300711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.300723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.300762 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.300776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.300787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.300798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.300810 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.300835 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.300890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.300905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.300916 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.300927 | orchestrator | 2025-09-02 00:52:42.300938 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-02 00:52:42.300949 | orchestrator | Tuesday 02 September 2025 00:47:12 +0000 (0:00:02.150) 0:01:08.053 ***** 2025-09-02 00:52:42.300961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.300972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.300984 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.301013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.301034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.301046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.301057 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.301068 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.301079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.301090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.301102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.301113 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.301124 | orchestrator | 2025-09-02 00:52:42.301135 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-02 00:52:42.301146 | orchestrator | Tuesday 02 September 2025 00:47:13 +0000 (0:00:01.042) 0:01:09.095 ***** 2025-09-02 00:52:42.301163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.301291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.301365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.301380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.301392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.301403 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.301413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.301423 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.301433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.301457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.301468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.301483 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.301493 | 
orchestrator | 2025-09-02 00:52:42.301503 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-02 00:52:42.301513 | orchestrator | Tuesday 02 September 2025 00:47:14 +0000 (0:00:01.620) 0:01:10.716 ***** 2025-09-02 00:52:42.301523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.301533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.301543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.301553 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.301563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.301579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2025-09-02 00:52:42.301598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.301608 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.301623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.301634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.301644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.301654 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.301663 | orchestrator | 2025-09-02 00:52:42.301762 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-02 00:52:42.301801 | orchestrator | Tuesday 02 September 2025 00:47:16 +0000 (0:00:01.331) 0:01:12.047 ***** 2025-09-02 00:52:42.301811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.301830 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.301883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.301894 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.301912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.301923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.301933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.301943 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.301953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-02 00:52:42.301969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-02 00:52:42.302078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-02 00:52:42.302103 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.302113 | orchestrator | 2025-09-02 00:52:42.302123 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-02 00:52:42.302133 | orchestrator | Tuesday 02 September 2025 00:47:17 +0000 (0:00:01.492) 0:01:13.540 ***** 2025-09-02 00:52:42.302143 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-02 00:52:42.302154 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-02 00:52:42.302172 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-02 00:52:42.302182 | orchestrator | 2025-09-02 00:52:42.302192 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-02 00:52:42.302202 | orchestrator | Tuesday 02 September 2025 00:47:20 +0000 (0:00:02.537) 0:01:16.078 ***** 2025-09-02 00:52:42.302212 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-02 00:52:42.302222 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-02 00:52:42.302231 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-02 00:52:42.302241 | orchestrator | 2025-09-02 00:52:42.302256 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-02 00:52:42.302295 | orchestrator | Tuesday 02 September 2025 00:47:22 +0000 (0:00:01.828) 0:01:17.906 ***** 2025-09-02 00:52:42.302322 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-02 00:52:42.302333 | orchestrator | skipping: 
[testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-02 00:52:42.302343 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-02 00:52:42.302352 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-02 00:52:42.302362 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.302372 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-02 00:52:42.302382 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.302392 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-02 00:52:42.302453 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.302464 | orchestrator | 2025-09-02 00:52:42.302474 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-02 00:52:42.302484 | orchestrator | Tuesday 02 September 2025 00:47:23 +0000 (0:00:01.318) 0:01:19.225 ***** 2025-09-02 00:52:42.302495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-02 00:52:42.302505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-02 00:52:42.302516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-02 00:52:42.302535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-02 00:52:42.302551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-02 00:52:42.302562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-02 00:52:42.302578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-02 00:52:42.302588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-02 00:52:42.302598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-02 00:52:42.302608 | orchestrator | 2025-09-02 00:52:42.302618 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-02 00:52:42.302628 | orchestrator | Tuesday 02 September 2025 00:47:26 +0000 (0:00:02.850) 0:01:22.076 ***** 2025-09-02 00:52:42.302638 | 
orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.302648 | orchestrator | 2025-09-02 00:52:42.302658 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-02 00:52:42.302668 | orchestrator | Tuesday 02 September 2025 00:47:27 +0000 (0:00:00.824) 0:01:22.900 ***** 2025-09-02 00:52:42.302679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-02 00:52:42.302840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-02 00:52:42.302854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.302873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.302884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-02 00:52:42.302894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-02 00:52:42.302904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.302931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-02 00:52:42.302945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.302963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-02 00:52:42.302973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.302983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.302993 | orchestrator | 2025-09-02 00:52:42.303003 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-02 00:52:42.303013 | orchestrator | Tuesday 02 September 2025 00:47:31 +0000 (0:00:04.676) 0:01:27.577 ***** 2025-09-02 00:52:42.303023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-02 00:52:42.303041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-02 00:52:42.303063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.303074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.303084 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.303095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-02 00:52:42.303105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-02 00:52:42.303115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.303125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.303135 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.303157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-02 00:52:42.303173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-02 00:52:42.303224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.303235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.303245 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.303255 | orchestrator | 2025-09-02 00:52:42.303265 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-02 00:52:42.303275 | orchestrator | Tuesday 02 September 2025 00:47:32 +0000 (0:00:01.099) 0:01:28.677 ***** 2025-09-02 00:52:42.303286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-02 00:52:42.303298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-02 00:52:42.303368 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.303380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-02 00:52:42.303390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-02 00:52:42.303400 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.303409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-02 00:52:42.303426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-02 00:52:42.303437 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.303446 | orchestrator | 2025-09-02 00:52:42.303463 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-02 00:52:42.303474 | orchestrator | Tuesday 02 September 2025 00:47:34 +0000 (0:00:01.673) 0:01:30.350 ***** 2025-09-02 00:52:42.303483 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.303493 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.303503 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.303512 | orchestrator | 2025-09-02 00:52:42.303522 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-02 00:52:42.303532 | orchestrator | Tuesday 02 September 2025 00:47:36 +0000 (0:00:01.394) 0:01:31.745 ***** 2025-09-02 00:52:42.303541 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.303551 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.303561 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.303570 | orchestrator | 2025-09-02 00:52:42.303588 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-02 00:52:42.303598 | orchestrator | Tuesday 02 September 2025 00:47:38 +0000 (0:00:02.078) 0:01:33.824 ***** 2025-09-02 00:52:42.303608 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.303618 | orchestrator | 2025-09-02 00:52:42.303628 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-02 00:52:42.303637 | orchestrator | Tuesday 02 September 2025 00:47:38 +0000 (0:00:00.865) 0:01:34.690 ***** 2025-09-02 00:52:42.303697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.303709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.303720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.303737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.303760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.303771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.303782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.303792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.303802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.303818 | orchestrator | 2025-09-02 00:52:42.303828 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-02 00:52:42.303838 | orchestrator | Tuesday 02 September 2025 00:47:43 +0000 (0:00:04.790) 0:01:39.481 ***** 2025-09-02 00:52:42.303855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.303870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.303881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.303891 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.303902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.303912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.303932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 
'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.303954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.303965 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.303975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.303985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.303995 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.304005 | orchestrator | 2025-09-02 00:52:42.304015 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-02 00:52:42.304025 | orchestrator | Tuesday 02 September 2025 00:47:44 +0000 (0:00:00.593) 0:01:40.075 ***** 2025-09-02 00:52:42.304035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-02 00:52:42.304053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-02 00:52:42.304064 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.304073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-02 00:52:42.304083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-02 00:52:42.304093 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.304114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-02 00:52:42.304125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-02 00:52:42.304135 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.304145 | orchestrator | 2025-09-02 00:52:42.304178 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-02 00:52:42.304189 | orchestrator | Tuesday 02 September 2025 00:47:45 +0000 (0:00:01.026) 0:01:41.102 ***** 2025-09-02 00:52:42.304199 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.304209 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.304219 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.304228 | orchestrator | 2025-09-02 00:52:42.304238 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-02 00:52:42.304247 | orchestrator | Tuesday 02 September 2025 00:47:46 +0000 (0:00:01.447) 0:01:42.550 ***** 2025-09-02 00:52:42.304257 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.304267 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.304277 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.304286 | orchestrator | 2025-09-02 00:52:42.304302 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-02 00:52:42.304327 | orchestrator | Tuesday 02 September 2025 00:47:48 +0000 (0:00:02.029) 0:01:44.579 ***** 2025-09-02 00:52:42.304337 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.304347 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.304356 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.304366 | orchestrator | 2025-09-02 00:52:42.304376 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-02 00:52:42.304504 | orchestrator | Tuesday 02 September 2025 00:47:49 +0000 (0:00:00.339) 0:01:44.919 ***** 2025-09-02 00:52:42.304516 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.304526 | orchestrator | 2025-09-02 00:52:42.304541 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-02 00:52:42.304551 | orchestrator | Tuesday 02 September 2025 00:47:49 +0000 (0:00:00.814) 0:01:45.734 ***** 2025-09-02 
00:52:42.304562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-02 00:52:42.304581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-02 00:52:42.304592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-02 00:52:42.304602 | orchestrator | 2025-09-02 00:52:42.304612 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-02 00:52:42.304622 | orchestrator | Tuesday 02 September 2025 00:47:52 +0000 (0:00:02.644) 0:01:48.379 ***** 2025-09-02 00:52:42.304638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-02 00:52:42.304649 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.304664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-02 00:52:42.304674 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.304685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-02 00:52:42.304701 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.304711 | orchestrator | 2025-09-02 00:52:42.304721 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-02 00:52:42.304730 | orchestrator | Tuesday 02 September 2025 00:47:54 +0000 (0:00:01.540) 0:01:49.919 ***** 2025-09-02 00:52:42.304742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-02 00:52:42.304754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-02 00:52:42.304765 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.304775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-02 00:52:42.304785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-02 00:52:42.304796 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.304811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-02 00:52:42.304822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-02 00:52:42.304837 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.304846 | orchestrator | 2025-09-02 00:52:42.304863 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-09-02 00:52:42.304873 | orchestrator | Tuesday 02 September 2025 00:47:56 +0000 (0:00:01.868) 0:01:51.788 ***** 2025-09-02 00:52:42.304883 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.304893 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.304903 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.304912 | orchestrator | 2025-09-02 00:52:42.304922 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-02 00:52:42.304932 | orchestrator | Tuesday 02 September 2025 00:47:56 +0000 (0:00:00.720) 0:01:52.509 ***** 2025-09-02 00:52:42.304942 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.304951 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.304961 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.304971 | orchestrator | 2025-09-02 00:52:42.304981 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-09-02 00:52:42.304990 | orchestrator | Tuesday 02 September 2025 00:47:57 +0000 (0:00:01.188) 0:01:53.698 ***** 2025-09-02 00:52:42.305000 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.305010 | orchestrator | 2025-09-02 00:52:42.305020 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-02 00:52:42.305030 | orchestrator | Tuesday 02 September 2025 00:47:58 +0000 (0:00:00.718) 0:01:54.417 ***** 2025-09-02 00:52:42.305040 | 
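The ceph-rgw item above differs from the API services: instead of deriving backends from an inventory group, it ships a 'custom_member_list' pointing at the Ceph nodes (testbed-node-3..5 on port 8081). A rough sketch of the backend stanza such a list would yield, assembled in Python; the 'server' lines are copied verbatim from the log, while the surrounding listen-block layout and the VIP placeholder are illustrative approximations, not the exact kolla-ansible template output:

    # Sketch: build an HAProxy listen block from the ceph-rgw custom_member_list.
    frontend = {
        "name": "radosgw",
        "port": "6780",
        "custom_member_list": [
            "server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5",
            "server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5",
            "server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5",
        ],
    }

    def render_listen_block(fe, vip="192.168.16.9"):
        # vip is a placeholder internal VIP for illustration, not taken from the log
        lines = [f"listen {fe['name']}", f"    bind {vip}:{fe['port']}"]
        lines += [f"    {member}" for member in fe["custom_member_list"]]
        return "\n".join(lines)

    print(render_listen_block(frontend))

This also explains why the ceph-rgw ProxySQL tasks below are skipped: RadosGW has no MariaDB backend, so only the HAProxy side of the role applies.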
orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.305051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.305061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.305078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.305102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.305113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.305123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.305133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.305149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.305169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.305217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.305228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.305238 | orchestrator | 2025-09-02 00:52:42.305248 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-02 00:52:42.305258 | orchestrator | Tuesday 02 September 2025 00:48:02 +0000 (0:00:04.145) 0:01:58.562 ***** 2025-09-02 00:52:42.305268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}})  2025-09-02 00:52:42.305279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.305357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.305371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.305381 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.305392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.305402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.305413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.305436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.305447 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.305462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.305473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.305483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.305493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.305509 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.305519 | orchestrator | 2025-09-02 00:52:42.305529 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-09-02 00:52:42.305539 | orchestrator | Tuesday 02 September 2025 00:48:04 +0000 (0:00:01.515) 0:02:00.078 ***** 2025-09-02 00:52:42.305549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-02 00:52:42.305565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-02 00:52:42.305575 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.305585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-02 00:52:42.305598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-02 00:52:42.305606 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.305614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-02 00:52:42.305622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-02 00:52:42.305630 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.305638 | orchestrator | 2025-09-02 00:52:42.305646 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-09-02 00:52:42.305654 | orchestrator | Tuesday 02 September 2025 00:48:05 +0000 (0:00:01.327) 0:02:01.406 ***** 2025-09-02 00:52:42.305662 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.305670 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.305677 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.305685 | orchestrator | 2025-09-02 00:52:42.305693 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-09-02 00:52:42.305701 | orchestrator | Tuesday 02 September 2025 00:48:07 +0000 (0:00:01.518) 0:02:02.924 ***** 2025-09-02 00:52:42.305709 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.305717 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.305725 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.305733 | orchestrator | 2025-09-02 00:52:42.305741 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-09-02 00:52:42.305749 | orchestrator | Tuesday 02 September 2025 00:48:09 +0000 (0:00:02.424) 0:02:05.349 ***** 2025-09-02 00:52:42.305757 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.305765 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.305773 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.305780 | orchestrator | 2025-09-02 00:52:42.305788 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-09-02 00:52:42.305796 | orchestrator | Tuesday 02 September 2025 00:48:10 +0000 (0:00:00.536) 0:02:05.885 ***** 2025-09-02 00:52:42.305804 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.305812 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.305820 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.305828 | orchestrator | 2025-09-02 00:52:42.305843 | orchestrator | TASK [include_role : designate] ************************************************ 2025-09-02 00:52:42.305851 | orchestrator | Tuesday 02 September 2025 00:48:10 +0000 (0:00:00.309) 0:02:06.195 ***** 2025-09-02 00:52:42.305859 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.305867 | orchestrator | 2025-09-02 00:52:42.305875 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-09-02 00:52:42.305883 | orchestrator | Tuesday 02 September 2025 00:48:11 +0000 (0:00:00.785) 0:02:06.981 ***** 2025-09-02 00:52:42.305891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-02 00:52:42.307095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-02 00:52:42.307136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-02 00:52:42.307173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-02 00:52:42.307203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-02 00:52:42.307382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-02 00:52:42.307398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307590 | orchestrator | 2025-09-02 00:52:42.307598 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-02 00:52:42.307607 | orchestrator | Tuesday 02 September 2025 00:48:15 +0000 (0:00:04.375) 0:02:11.356 ***** 2025-09-02 00:52:42.307627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-02 00:52:42.307636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-02 00:52:42.307650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307698 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.307710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-02 00:52:42.307724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-02 00:52:42.307733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-02 00:52:42.307741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-02 00:52:42.307763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307844 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.307856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.307869 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.307877 | orchestrator | 2025-09-02 00:52:42.307885 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-09-02 00:52:42.307893 | orchestrator | Tuesday 02 September 2025 00:48:16 +0000 
(0:00:00.823) 0:02:12.180 ***** 2025-09-02 00:52:42.307902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-02 00:52:42.307910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-02 00:52:42.307919 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.307927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-02 00:52:42.307935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-02 00:52:42.307943 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.307951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-02 00:52:42.307959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-02 00:52:42.307967 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.307975 | orchestrator | 2025-09-02 00:52:42.307983 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-09-02 00:52:42.307991 | orchestrator | Tuesday 02 September 2025 00:48:17 +0000 (0:00:00.997) 0:02:13.178 ***** 2025-09-02 00:52:42.307999 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.308006 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.308014 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.308022 | orchestrator | 2025-09-02 00:52:42.308030 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-09-02 00:52:42.308038 | orchestrator | Tuesday 02 September 2025 00:48:19 +0000 (0:00:01.773) 0:02:14.951 ***** 2025-09-02 00:52:42.308046 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.308054 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.308062 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.308069 | orchestrator | 2025-09-02 00:52:42.308077 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-09-02 00:52:42.308085 | orchestrator | Tuesday 02 September 2025 00:48:21 +0000 (0:00:01.906) 0:02:16.857 ***** 2025-09-02 00:52:42.308093 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.308125 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.308134 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.308142 | orchestrator | 2025-09-02 00:52:42.308150 | orchestrator | TASK [include_role : glance] *************************************************** 2025-09-02 00:52:42.308158 | orchestrator | Tuesday 02 September 2025 00:48:21 +0000 (0:00:00.525) 0:02:17.383 ***** 2025-09-02 00:52:42.308166 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 
00:52:42.308174 | orchestrator | 2025-09-02 00:52:42.308182 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-02 00:52:42.308195 | orchestrator | Tuesday 02 September 2025 00:48:22 +0000 (0:00:00.827) 0:02:18.210 ***** 2025-09-02 00:52:42.308216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-02 00:52:42.308227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-02 00:52:42.308264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-02 00:52:42.308283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-02 00:52:42.308298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-02 00:52:42.308370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-02 00:52:42.308381 | orchestrator | 2025-09-02 00:52:42.308389 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-02 00:52:42.308397 | orchestrator | Tuesday 02 September 2025 00:48:26 +0000 (0:00:04.140) 0:02:22.350 ***** 2025-09-02 00:52:42.308411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-02 00:52:42.308432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-02 00:52:42.308441 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.308450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-02 00:52:42.308475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-02 00:52:42.308484 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.308493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-02 00:52:42.308516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-02 00:52:42.308525 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.308533 | orchestrator | 2025-09-02 00:52:42.308541 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-02 00:52:42.308549 | orchestrator | Tuesday 02 September 2025 00:48:29 +0000 (0:00:03.101) 0:02:25.452 ***** 2025-09-02 00:52:42.308558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-02 00:52:42.308566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-02 00:52:42.308575 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.308583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-02 00:52:42.308602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-02 00:52:42.308616 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.308624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-02 00:52:42.308639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-02 00:52:42.308647 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.308655 | orchestrator | 2025-09-02 00:52:42.308663 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-02 00:52:42.308671 | orchestrator | Tuesday 02 September 2025 00:48:32 +0000 (0:00:03.206) 0:02:28.658 ***** 2025-09-02 00:52:42.308683 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.308691 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.308699 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.308706 | orchestrator | 2025-09-02 00:52:42.308714 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-02 00:52:42.308722 | orchestrator | Tuesday 02 September 2025 00:48:34 +0000 (0:00:01.337) 0:02:29.996 ***** 2025-09-02 00:52:42.308730 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.308737 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.308745 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.308753 | orchestrator | 2025-09-02 00:52:42.308761 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-09-02 00:52:42.308769 | orchestrator | Tuesday 02 September 2025 00:48:36 +0000 (0:00:02.307) 0:02:32.304 ***** 2025-09-02 00:52:42.308777 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.308785 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.308793 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.308801 | orchestrator | 2025-09-02 00:52:42.308808 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-02 00:52:42.308816 | orchestrator | Tuesday 02 September 2025 00:48:37 +0000 (0:00:00.543) 0:02:32.847 ***** 2025-09-02 00:52:42.308824 | orchestrator 
| included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.308832 | orchestrator | 2025-09-02 00:52:42.308840 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-02 00:52:42.308848 | orchestrator | Tuesday 02 September 2025 00:48:37 +0000 (0:00:00.821) 0:02:33.669 ***** 2025-09-02 00:52:42.308857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-02 00:52:42.308870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-02 00:52:42.308879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-02 00:52:42.308886 | orchestrator | 2025-09-02 00:52:42.308893 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-02 00:52:42.308899 | orchestrator | Tuesday 02 September 2025 00:48:41 +0000 (0:00:03.327) 0:02:36.996 ***** 2025-09-02 00:52:42.308910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-02 00:52:42.308921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-02 00:52:42.308928 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.308935 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.308942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-02 00:52:42.308953 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.308960 | orchestrator | 2025-09-02 00:52:42.308966 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-02 00:52:42.308973 | orchestrator | Tuesday 02 September 2025 00:48:41 +0000 (0:00:00.615) 0:02:37.612 ***** 2025-09-02 00:52:42.308980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-02 00:52:42.308987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-02 00:52:42.308994 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.309001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-02 00:52:42.309007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-02 00:52:42.309014 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.309021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-02 00:52:42.309028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-02 00:52:42.309035 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.309042 | orchestrator | 2025-09-02 00:52:42.309048 | orchestrator | TASK [proxysql-config : 
Copying over grafana ProxySQL users config] ************ 2025-09-02 00:52:42.309055 | orchestrator | Tuesday 02 September 2025 00:48:42 +0000 (0:00:00.651) 0:02:38.264 ***** 2025-09-02 00:52:42.309062 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.309068 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.309075 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.309082 | orchestrator | 2025-09-02 00:52:42.309088 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-02 00:52:42.309095 | orchestrator | Tuesday 02 September 2025 00:48:43 +0000 (0:00:01.349) 0:02:39.613 ***** 2025-09-02 00:52:42.309102 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.309109 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.309115 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.309122 | orchestrator | 2025-09-02 00:52:42.309129 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-02 00:52:42.309135 | orchestrator | Tuesday 02 September 2025 00:48:46 +0000 (0:00:02.172) 0:02:41.785 ***** 2025-09-02 00:52:42.309142 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.309149 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.309159 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.309166 | orchestrator | 2025-09-02 00:52:42.309173 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-09-02 00:52:42.309180 | orchestrator | Tuesday 02 September 2025 00:48:46 +0000 (0:00:00.533) 0:02:42.319 ***** 2025-09-02 00:52:42.309186 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.309193 | orchestrator | 2025-09-02 00:52:42.309200 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-02 00:52:42.309207 | orchestrator | Tuesday 02 September 2025 00:48:47 +0000 (0:00:00.929) 0:02:43.248 ***** 2025-09-02 00:52:42.309217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-02 00:52:42.309902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-02 00:52:42.310004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 
'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-02 00:52:42.310040 | orchestrator | 2025-09-02 00:52:42.310049 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-02 00:52:42.310056 | orchestrator | Tuesday 02 September 2025 00:48:51 +0000 (0:00:03.986) 0:02:47.235 ***** 2025-09-02 00:52:42.310111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-02 00:52:42.310127 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.310135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-02 00:52:42.310143 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.310192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-02 00:52:42.310213 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.310219 | orchestrator | 2025-09-02 00:52:42.310226 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-09-02 00:52:42.310233 | orchestrator | Tuesday 02 September 2025 00:48:52 +0000 (0:00:01.237) 0:02:48.472 ***** 2025-09-02 00:52:42.310240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-02 00:52:42.310248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-02 00:52:42.310256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-02 00:52:42.310263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-02 00:52:42.310271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-02 00:52:42.310278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-02 00:52:42.310285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-02 00:52:42.310292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-02 00:52:42.310299 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.310667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-02 00:52:42.310687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-02 00:52:42.310694 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.310707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-02 00:52:42.310715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-02 00:52:42.310721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-02 00:52:42.310728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-02 00:52:42.310735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-02 00:52:42.310741 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.310747 | orchestrator | 2025-09-02 
00:52:42.310754 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-09-02 00:52:42.310760 | orchestrator | Tuesday 02 September 2025 00:48:53 +0000 (0:00:01.070) 0:02:49.542 ***** 2025-09-02 00:52:42.310766 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.310773 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.310779 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.310785 | orchestrator | 2025-09-02 00:52:42.310792 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-09-02 00:52:42.310798 | orchestrator | Tuesday 02 September 2025 00:48:55 +0000 (0:00:01.366) 0:02:50.909 ***** 2025-09-02 00:52:42.310804 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.310810 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.310817 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.310823 | orchestrator | 2025-09-02 00:52:42.310829 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-09-02 00:52:42.310836 | orchestrator | Tuesday 02 September 2025 00:48:57 +0000 (0:00:02.061) 0:02:52.971 ***** 2025-09-02 00:52:42.310842 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.310848 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.310854 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.310861 | orchestrator | 2025-09-02 00:52:42.310867 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-09-02 00:52:42.310873 | orchestrator | Tuesday 02 September 2025 00:48:57 +0000 (0:00:00.310) 0:02:53.281 ***** 2025-09-02 00:52:42.310880 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.310886 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.310900 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.310907 | orchestrator | 2025-09-02 00:52:42.310913 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-09-02 00:52:42.310919 | orchestrator | Tuesday 02 September 2025 00:48:58 +0000 (0:00:00.511) 0:02:53.792 ***** 2025-09-02 00:52:42.310925 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.310932 | orchestrator | 2025-09-02 00:52:42.310938 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-09-02 00:52:42.310945 | orchestrator | Tuesday 02 September 2025 00:48:59 +0000 (0:00:00.981) 0:02:54.774 ***** 2025-09-02 00:52:42.311005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:52:42.311021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-02 00:52:42.311028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-02 00:52:42.311035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:52:42.311043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-02 00:52:42.311085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:52:42.311348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-02 00:52:42.311368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-02 00:52:42.311375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-02 00:52:42.311382 | orchestrator | 2025-09-02 00:52:42.311388 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-09-02 00:52:42.311395 | orchestrator | Tuesday 02 September 2025 00:49:02 +0000 (0:00:03.350) 0:02:58.125 ***** 2025-09-02 00:52:42.311403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-02 00:52:42.311418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-02 00:52:42.311496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-02 00:52:42.311507 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.311520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-02 00:52:42.311527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-02 00:52:42.311534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-02 00:52:42.311546 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.311553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-02 00:52:42.312106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-02 00:52:42.312129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-02 00:52:42.312137 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.312143 | orchestrator | 2025-09-02 00:52:42.312150 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-09-02 00:52:42.312156 | orchestrator | Tuesday 02 September 2025 00:49:03 +0000 (0:00:00.906) 0:02:59.031 ***** 2025-09-02 00:52:42.312163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}})  2025-09-02 00:52:42.312171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-02 00:52:42.312178 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.312185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-02 00:52:42.312191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-02 00:52:42.312207 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.312214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-02 00:52:42.312221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-02 00:52:42.312227 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.312233 | orchestrator | 2025-09-02 00:52:42.312239 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-09-02 00:52:42.312245 | orchestrator | Tuesday 02 September 2025 00:49:04 +0000 (0:00:00.836) 0:02:59.868 ***** 2025-09-02 00:52:42.312252 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.312258 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.312264 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.312270 | orchestrator | 2025-09-02 00:52:42.312276 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-09-02 00:52:42.312283 | orchestrator | Tuesday 02 September 2025 00:49:05 +0000 (0:00:01.351) 0:03:01.219 ***** 2025-09-02 00:52:42.312289 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.312295 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.312301 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.312329 | orchestrator | 2025-09-02 00:52:42.312337 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-09-02 00:52:42.312345 | orchestrator | Tuesday 02 September 2025 00:49:07 +0000 (0:00:02.178) 0:03:03.398 ***** 2025-09-02 00:52:42.312352 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.312360 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.312367 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.312374 | orchestrator | 2025-09-02 00:52:42.312381 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-09-02 00:52:42.312388 | orchestrator | Tuesday 02 September 2025 00:49:08 +0000 (0:00:00.536) 0:03:03.935 ***** 2025-09-02 
00:52:42.312395 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.312402 | orchestrator | 2025-09-02 00:52:42.312410 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-09-02 00:52:42.312417 | orchestrator | Tuesday 02 September 2025 00:49:09 +0000 (0:00:01.169) 0:03:05.104 ***** 2025-09-02 00:52:42.312503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 00:52:42.312515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.312528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 00:52:42.312536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.312577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 00:52:42.312633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.312642 | orchestrator | 2025-09-02 00:52:42.312649 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-09-02 00:52:42.312657 | orchestrator | Tuesday 02 September 2025 00:49:13 +0000 (0:00:03.680) 0:03:08.785 ***** 2025-09-02 00:52:42.312670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-02 00:52:42.312678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.312684 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.312691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-02 00:52:42.312729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.312737 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.312747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-02 00:52:42.312757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.312764 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.312770 | orchestrator | 2025-09-02 00:52:42.312776 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-02 00:52:42.312783 | orchestrator | Tuesday 02 September 2025 00:49:14 +0000 (0:00:01.045) 0:03:09.831 ***** 2025-09-02 00:52:42.312789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-02 00:52:42.312796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-02 00:52:42.312802 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.312809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-02 00:52:42.312815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-02 00:52:42.312821 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.312827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-02 00:52:42.312834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-02 00:52:42.312840 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.312846 | orchestrator | 2025-09-02 00:52:42.312852 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-09-02 00:52:42.312858 | orchestrator | Tuesday 02 September 2025 00:49:14 +0000 (0:00:00.893) 0:03:10.724 ***** 2025-09-02 00:52:42.312865 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.312871 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.312877 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.312883 | orchestrator | 2025-09-02 00:52:42.312889 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-09-02 00:52:42.312895 | orchestrator | Tuesday 02 September 2025 00:49:16 +0000 (0:00:01.297) 0:03:12.022 ***** 2025-09-02 00:52:42.312901 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.312908 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.312914 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.312920 | orchestrator | 2025-09-02 00:52:42.312926 | orchestrator | TASK [include_role : manila] 
*************************************************** 2025-09-02 00:52:42.312932 | orchestrator | Tuesday 02 September 2025 00:49:18 +0000 (0:00:02.232) 0:03:14.255 ***** 2025-09-02 00:52:42.312989 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.312998 | orchestrator | 2025-09-02 00:52:42.313004 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-02 00:52:42.313010 | orchestrator | Tuesday 02 September 2025 00:49:19 +0000 (0:00:01.365) 0:03:15.620 ***** 2025-09-02 00:52:42.313020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-02 00:52:42.313043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.313050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.313057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.313064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 
'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-02 00:52:42.313121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.313134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.313141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.313147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-02 00:52:42.313154 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.313161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.313215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.313224 | orchestrator | 2025-09-02 00:52:42.313231 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-02 00:52:42.313237 | orchestrator | Tuesday 02 September 2025 00:49:24 +0000 (0:00:04.651) 0:03:20.272 ***** 2025-09-02 00:52:42.313247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-02 00:52:42.313255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.313261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.313268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.313274 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.313281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-02 00:52:42.313394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-02 00:52:42.313408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.313415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.313422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.313429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.313435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.313488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.313508 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.313515 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.313521 | orchestrator | 2025-09-02 00:52:42.313528 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-02 00:52:42.313534 | orchestrator | Tuesday 02 September 2025 00:49:25 +0000 (0:00:00.922) 0:03:21.195 ***** 2025-09-02 00:52:42.313540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-02 00:52:42.313553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-02 00:52:42.313560 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.313566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-02 00:52:42.313572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-02 00:52:42.313578 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.313585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-02 00:52:42.313591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-02 00:52:42.313597 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.313603 | orchestrator | 2025-09-02 00:52:42.313609 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-02 00:52:42.313615 | orchestrator | Tuesday 02 September 2025 00:49:26 +0000 (0:00:01.236) 0:03:22.431 ***** 2025-09-02 00:52:42.313621 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.313627 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.313634 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.313640 | orchestrator | 2025-09-02 00:52:42.313646 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-02 00:52:42.313652 | orchestrator | Tuesday 02 September 2025 00:49:28 +0000 (0:00:01.392) 0:03:23.824 ***** 2025-09-02 00:52:42.313658 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.313664 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.313671 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.313677 | orchestrator | 2025-09-02 00:52:42.313687 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-02 00:52:42.313693 | orchestrator | Tuesday 02 September 2025 00:49:30 +0000 (0:00:02.185) 0:03:26.010 ***** 2025-09-02 00:52:42.313699 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.313705 | orchestrator | 
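(A minimal sketch, not part of the job output above: each kolla service item logged by the "Copying over ... haproxy config" tasks carries a 'haproxy' sub-dict, one entry per internal/external listener. The snippet below, in Python, reads a reduced copy of the keystone item shown earlier and prints one line per listener; summarize_listeners is a hypothetical helper for illustration, not a kolla-ansible function.)

# Reduced copy of the keystone 'haproxy' sub-dict as it appears in the log above.
keystone_service = {
    "haproxy": {
        "keystone_internal": {"enabled": True, "mode": "http", "external": False,
                              "port": "5000", "listen_port": "5000"},
        "keystone_external": {"enabled": True, "mode": "http", "external": True,
                              "external_fqdn": "api.testbed.osism.xyz",
                              "port": "5000", "listen_port": "5000"},
    }
}

def summarize_listeners(service):
    # Yield a short description of each HAProxy listener the role is asked to template.
    for name, cfg in service.get("haproxy", {}).items():
        scope = "external" if cfg.get("external") else "internal"
        yield (f"{name}: {scope} {cfg.get('mode')} "
               f"listen_port={cfg.get('listen_port')} backend_port={cfg.get('port')}")

for line in summarize_listeners(keystone_service):
    print(line)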
2025-09-02 00:52:42.313712 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-02 00:52:42.313718 | orchestrator | Tuesday 02 September 2025 00:49:31 +0000 (0:00:01.353) 0:03:27.363 ***** 2025-09-02 00:52:42.313725 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-02 00:52:42.313731 | orchestrator | 2025-09-02 00:52:42.313737 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-02 00:52:42.313743 | orchestrator | Tuesday 02 September 2025 00:49:34 +0000 (0:00:02.884) 0:03:30.248 ***** 2025-09-02 00:52:42.313796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-02 00:52:42.313821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-02 00:52:42.313829 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.313835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-02 00:52:42.313845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-02 00:52:42.313850 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.313901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-02 00:52:42.313910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-02 00:52:42.313919 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.313925 | orchestrator | 2025-09-02 00:52:42.313931 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-02 00:52:42.313936 | orchestrator | Tuesday 02 September 2025 00:49:37 +0000 (0:00:02.554) 0:03:32.802 ***** 2025-09-02 00:52:42.313942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-02 00:52:42.313986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 
'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-02 00:52:42.313994 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.314002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-02 00:52:42.314044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-02 00:52:42.314052 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.314102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-02 00:52:42.314111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-02 00:52:42.314121 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.314126 | orchestrator | 2025-09-02 00:52:42.314132 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-02 00:52:42.314137 | orchestrator | Tuesday 02 September 2025 00:49:39 +0000 (0:00:02.510) 0:03:35.312 ***** 2025-09-02 00:52:42.314143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-02 00:52:42.314149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-02 00:52:42.314154 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.314160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-02 00:52:42.314166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-02 00:52:42.314172 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.314225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-02 00:52:42.314237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-02 00:52:42.314247 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.314253 | orchestrator | 2025-09-02 00:52:42.314258 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-02 00:52:42.314264 | orchestrator | Tuesday 02 September 2025 00:49:42 +0000 (0:00:02.863) 0:03:38.176 ***** 2025-09-02 00:52:42.314269 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.314275 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.314280 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.314286 | orchestrator | 2025-09-02 00:52:42.314291 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-02 00:52:42.314297 | orchestrator | Tuesday 02 September 2025 00:49:44 +0000 
(0:00:01.858) 0:03:40.034 ***** 2025-09-02 00:52:42.314302 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.314323 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.314328 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.314333 | orchestrator | 2025-09-02 00:52:42.314339 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-02 00:52:42.314344 | orchestrator | Tuesday 02 September 2025 00:49:45 +0000 (0:00:01.536) 0:03:41.571 ***** 2025-09-02 00:52:42.314350 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.314355 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.314361 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.314366 | orchestrator | 2025-09-02 00:52:42.314371 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-02 00:52:42.314377 | orchestrator | Tuesday 02 September 2025 00:49:46 +0000 (0:00:00.326) 0:03:41.898 ***** 2025-09-02 00:52:42.314382 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.314388 | orchestrator | 2025-09-02 00:52:42.314393 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-02 00:52:42.314398 | orchestrator | Tuesday 02 September 2025 00:49:47 +0000 (0:00:01.359) 0:03:43.257 ***** 2025-09-02 00:52:42.314404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-02 00:52:42.314411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-02 00:52:42.314468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': 
{'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-02 00:52:42.314483 | orchestrator | 2025-09-02 00:52:42.314491 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-02 00:52:42.314497 | orchestrator | Tuesday 02 September 2025 00:49:49 +0000 (0:00:01.603) 0:03:44.860 ***** 2025-09-02 00:52:42.314502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-02 00:52:42.314508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-02 00:52:42.314514 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.314519 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.314525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-02 00:52:42.314531 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.314536 | orchestrator | 2025-09-02 00:52:42.314542 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-02 00:52:42.314547 | orchestrator | Tuesday 02 September 2025 00:49:49 +0000 (0:00:00.413) 0:03:45.273 ***** 2025-09-02 00:52:42.314553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-02 00:52:42.314559 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.314565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-02 00:52:42.314574 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.314619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-02 00:52:42.314627 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.314633 | orchestrator | 2025-09-02 00:52:42.314638 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-02 00:52:42.314644 | orchestrator | Tuesday 02 September 2025 00:49:50 +0000 (0:00:00.967) 0:03:46.241 ***** 2025-09-02 00:52:42.314649 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.314654 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.314660 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.314665 | orchestrator | 2025-09-02 00:52:42.314671 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-02 00:52:42.314676 | orchestrator | Tuesday 02 September 2025 00:49:50 +0000 (0:00:00.487) 0:03:46.728 ***** 2025-09-02 00:52:42.314687 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.314692 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.314698 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.314703 | orchestrator | 2025-09-02 00:52:42.314708 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-02 00:52:42.314714 | orchestrator | Tuesday 02 September 2025 00:49:52 +0000 (0:00:01.273) 0:03:48.002 ***** 2025-09-02 00:52:42.314719 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.314725 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.314730 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.314735 | orchestrator | 2025-09-02 00:52:42.314741 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-02 00:52:42.314746 | orchestrator | Tuesday 02 September 2025 00:49:52 +0000 (0:00:00.310) 0:03:48.312 ***** 2025-09-02 00:52:42.314752 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.314757 | orchestrator | 2025-09-02 00:52:42.314763 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-02 00:52:42.314768 | orchestrator | Tuesday 02 September 2025 00:49:53 +0000 (0:00:01.398) 0:03:49.711 ***** 2025-09-02 00:52:42.314774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 00:52:42.314780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.314790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.314838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.314848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-02 00:52:42.314855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.314860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-02 00:52:42.314877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 00:52:42.314888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-02 00:52:42.314938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.314949 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.314955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 00:52:42.314961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.314967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.314977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-02 00:52:42.315019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-02 00:52:42.315025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-02 00:52:42.315031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 00:52:42.315093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-02 00:52:42.315102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-02 00:52:42.315107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-02 00:52:42.315114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-02 00:52:42.315162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 00:52:42.315232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-02 00:52:42.315243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-02 00:52:42.315294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-02 00:52:42.315328 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-02 00:52:42.315334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-02 00:52:42.315350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-02 00:52:42.315356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 
'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-02 00:52:42.315425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 00:52:42.315431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-02 00:52:42.315449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-02 00:52:42.315455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-02 00:52:42.315511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-02 00:52:42.315517 | orchestrator | 2025-09-02 00:52:42.315523 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-02 00:52:42.315529 | orchestrator | Tuesday 02 September 2025 00:49:58 +0000 (0:00:04.182) 0:03:53.894 ***** 2025-09-02 00:52:42.315535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 00:52:42.315547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-02 00:52:42.315627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 00:52:42.315644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-02 00:52:42.315650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-02 00:52:42.315708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 
'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 00:52:42.315736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-02 00:52:42.315790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 00:52:42.315799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-02 00:52:42.315826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-02 00:52:42.315872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-02 00:52:42.315890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-02 00:52:42.315896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-02 00:52:42.315976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-02 00:52:42.315987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 00:52:42.315992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.315999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-02 00:52:42.316004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.316010 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.316067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-02 00:52:42.316079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-02 00:52:42.316089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-02 00:52:42.316095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2025-09-02 00:52:42.316100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.316106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.316112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 00:52:42.316139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-02 00:52:42.316149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.316155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-02 00:52:42.316160 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.316166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-02 00:52:42.316172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-02 00:52:42.316187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.316209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 
'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-02 00:52:42.316221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-02 00:52:42.316227 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.316233 | orchestrator | 2025-09-02 00:52:42.316239 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-02 00:52:42.316245 | orchestrator | Tuesday 02 September 2025 00:49:59 +0000 (0:00:01.462) 0:03:55.356 ***** 2025-09-02 00:52:42.316250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-02 00:52:42.316256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-02 00:52:42.316262 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.316267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-02 00:52:42.316273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-02 00:52:42.316278 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.316284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-02 00:52:42.316289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-02 00:52:42.316295 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.316300 | orchestrator | 2025-09-02 00:52:42.316319 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-02 00:52:42.316325 | orchestrator | Tuesday 02 September 2025 00:50:01 +0000 (0:00:02.086) 0:03:57.443 ***** 2025-09-02 00:52:42.316331 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.316336 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.316341 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.316347 | orchestrator | 2025-09-02 00:52:42.316352 | 
orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-02 00:52:42.316358 | orchestrator | Tuesday 02 September 2025 00:50:03 +0000 (0:00:01.298) 0:03:58.741 ***** 2025-09-02 00:52:42.316371 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.316377 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.316383 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.316388 | orchestrator | 2025-09-02 00:52:42.316393 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-02 00:52:42.316399 | orchestrator | Tuesday 02 September 2025 00:50:05 +0000 (0:00:02.024) 0:04:00.766 ***** 2025-09-02 00:52:42.316408 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.316413 | orchestrator | 2025-09-02 00:52:42.316419 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-02 00:52:42.316424 | orchestrator | Tuesday 02 September 2025 00:50:06 +0000 (0:00:01.242) 0:04:02.009 ***** 2025-09-02 00:52:42.316449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.316459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.316465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.316471 | orchestrator | 2025-09-02 00:52:42.316476 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-02 00:52:42.316482 | orchestrator | Tuesday 02 September 2025 00:50:09 +0000 (0:00:03.485) 0:04:05.494 ***** 2025-09-02 00:52:42.316488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.316496 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.316518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.316525 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.316533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.316539 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.316544 | orchestrator | 2025-09-02 00:52:42.316550 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-02 00:52:42.316555 | orchestrator | Tuesday 02 September 2025 00:50:10 +0000 (0:00:00.558) 0:04:06.053 ***** 2025-09-02 00:52:42.316561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-02 00:52:42.316567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-02 00:52:42.316573 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.316578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-02 00:52:42.316584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-02 00:52:42.316589 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.316595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-02 00:52:42.316600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-02 00:52:42.316610 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.316615 | orchestrator | 2025-09-02 00:52:42.316621 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-02 00:52:42.316626 | orchestrator | Tuesday 02 September 2025 00:50:11 +0000 (0:00:00.780) 0:04:06.834 ***** 2025-09-02 00:52:42.316631 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.316637 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.316642 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.316648 | orchestrator | 2025-09-02 00:52:42.316653 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-02 00:52:42.316659 | orchestrator | Tuesday 02 September 2025 00:50:12 +0000 (0:00:01.234) 0:04:08.068 ***** 2025-09-02 00:52:42.316664 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.316670 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.316675 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.316680 | orchestrator | 2025-09-02 00:52:42.316686 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-02 00:52:42.316692 | orchestrator | Tuesday 02 September 2025 00:50:14 +0000 (0:00:02.107) 0:04:10.176 ***** 2025-09-02 00:52:42.316699 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 
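The service definitions dumped by the haproxy-config tasks above each carry an optional 'haproxy' mapping that lists the frontends to expose (internal and external, with port, listen_port and tls_backend). As a minimal illustrative sketch only (plain Python, not part of this job output and not kolla-ansible's own code), the placement-api entry logged above reduces to its enabled frontends like this:

# Sketch under assumptions, not kolla-ansible source: filter a service
# definition's 'haproxy' mapping (as dumped in the task output above)
# down to the frontends that are enabled.
def _is_enabled(value):
    # The dumps above mix booleans (True/False) and strings ('no');
    # treat anything that is not an explicit "off" marker as enabled.
    return value not in (False, "no", "false", "False", None)

service = {
    "container_name": "placement_api",
    "enabled": True,
    "haproxy": {
        "placement_api": {
            "enabled": True, "mode": "http", "external": False,
            "port": "8780", "listen_port": "8780", "tls_backend": "no",
        },
        "placement_api_external": {
            "enabled": True, "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "8780", "listen_port": "8780", "tls_backend": "no",
        },
    },
}

enabled_frontends = {
    name: cfg
    for name, cfg in service.get("haproxy", {}).items()
    if _is_enabled(cfg.get("enabled"))
}
print(sorted(enabled_frontends))  # ['placement_api', 'placement_api_external']

Entries whose 'enabled' value is False or 'no' (for example the neutron_tls_proxy frontends above) would be dropped by the same filter, which is why the corresponding tasks report "skipping" for those items.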
2025-09-02 00:52:42.316705 | orchestrator | 2025-09-02 00:52:42.316712 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-02 00:52:42.316718 | orchestrator | Tuesday 02 September 2025 00:50:15 +0000 (0:00:01.541) 0:04:11.718 ***** 2025-09-02 00:52:42.316745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.316754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.316761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.316771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.316778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.316800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.316811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.316818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.316829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.316835 | orchestrator | 2025-09-02 00:52:42.316840 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-02 00:52:42.316846 | orchestrator | Tuesday 02 September 2025 00:50:20 +0000 (0:00:04.412) 0:04:16.131 ***** 2025-09-02 00:52:42.316867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.316877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.316883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}})  2025-09-02 00:52:42.316889 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.316895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.316904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.316910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.316916 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.316941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.316949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.316958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.316963 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.316969 | orchestrator | 2025-09-02 00:52:42.316975 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-02 00:52:42.316980 | orchestrator | Tuesday 02 September 2025 00:50:21 +0000 (0:00:01.287) 0:04:17.418 ***** 2025-09-02 00:52:42.316986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-02 00:52:42.316992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-02 00:52:42.316998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-02 00:52:42.317004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-02 00:52:42.317009 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.317015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-02 00:52:42.317020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-02 00:52:42.317026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-02 00:52:42.317031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-02 00:52:42.317052 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.317059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-02 00:52:42.317065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-02 00:52:42.317073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-02 00:52:42.317078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-02 00:52:42.317087 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.317093 | orchestrator | 2025-09-02 00:52:42.317098 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-02 00:52:42.317104 | orchestrator | Tuesday 02 September 2025 00:50:22 +0000 (0:00:00.906) 0:04:18.325 ***** 2025-09-02 00:52:42.317109 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.317115 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.317120 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.317125 | orchestrator | 2025-09-02 00:52:42.317131 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-02 00:52:42.317136 | orchestrator | Tuesday 02 September 2025 00:50:23 +0000 (0:00:01.395) 0:04:19.720 ***** 2025-09-02 00:52:42.317142 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.317147 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.317152 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.317158 | orchestrator | 2025-09-02 00:52:42.317163 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-02 00:52:42.317169 | orchestrator | Tuesday 02 September 2025 00:50:26 +0000 (0:00:02.234) 0:04:21.955 ***** 2025-09-02 00:52:42.317174 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.317179 | orchestrator | 2025-09-02 00:52:42.317185 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-02 00:52:42.317190 | orchestrator | Tuesday 02 September 2025 00:50:27 +0000 (0:00:01.541) 0:04:23.497 ***** 2025-09-02 00:52:42.317196 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item=nova-novncproxy) 2025-09-02 00:52:42.317202 | orchestrator | 2025-09-02 00:52:42.317207 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-02 00:52:42.317212 | orchestrator | Tuesday 02 September 2025 00:50:28 +0000 (0:00:00.828) 0:04:24.325 ***** 2025-09-02 00:52:42.317218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-02 00:52:42.317224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-02 00:52:42.317230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-02 00:52:42.317235 | orchestrator | 2025-09-02 00:52:42.317241 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-02 00:52:42.317246 | orchestrator | Tuesday 02 September 2025 00:50:32 +0000 (0:00:04.366) 0:04:28.691 ***** 2025-09-02 00:52:42.317267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-02 00:52:42.317277 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.317285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-02 00:52:42.317290 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.317296 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-02 00:52:42.317302 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.317348 | orchestrator | 2025-09-02 00:52:42.317354 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-02 00:52:42.317359 | orchestrator | Tuesday 02 September 2025 00:50:34 +0000 (0:00:01.572) 0:04:30.264 ***** 2025-09-02 00:52:42.317365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-02 00:52:42.317371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-02 00:52:42.317377 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.317382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-02 00:52:42.317388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-02 00:52:42.317394 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.317399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-02 00:52:42.317405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-02 00:52:42.317411 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.317416 | orchestrator | 2025-09-02 00:52:42.317422 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-02 00:52:42.317427 | orchestrator | Tuesday 02 September 2025 00:50:36 +0000 (0:00:01.721) 0:04:31.985 ***** 2025-09-02 00:52:42.317436 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.317442 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.317447 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.317453 | orchestrator | 2025-09-02 00:52:42.317458 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-02 00:52:42.317464 | orchestrator | Tuesday 02 September 2025 00:50:38 +0000 (0:00:02.570) 0:04:34.555 ***** 2025-09-02 00:52:42.317469 | orchestrator | 
changed: [testbed-node-0] 2025-09-02 00:52:42.317474 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.317480 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.317485 | orchestrator | 2025-09-02 00:52:42.317491 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-02 00:52:42.317496 | orchestrator | Tuesday 02 September 2025 00:50:41 +0000 (0:00:03.083) 0:04:37.639 ***** 2025-09-02 00:52:42.317518 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-02 00:52:42.317525 | orchestrator | 2025-09-02 00:52:42.317530 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-02 00:52:42.317536 | orchestrator | Tuesday 02 September 2025 00:50:43 +0000 (0:00:01.437) 0:04:39.076 ***** 2025-09-02 00:52:42.317544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-02 00:52:42.317550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-02 00:52:42.317556 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.317561 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.317567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-02 00:52:42.317573 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.317578 | orchestrator | 2025-09-02 00:52:42.317584 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-02 00:52:42.317589 | orchestrator | Tuesday 02 September 2025 00:50:44 +0000 (0:00:01.285) 0:04:40.362 ***** 2025-09-02 00:52:42.317595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-02 00:52:42.317600 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.317609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-02 00:52:42.317615 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.317621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-02 00:52:42.317626 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.317632 | orchestrator | 2025-09-02 00:52:42.317637 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-02 00:52:42.317643 | orchestrator | Tuesday 02 September 2025 00:50:45 +0000 (0:00:01.310) 0:04:41.672 ***** 2025-09-02 00:52:42.317648 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.317653 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.317659 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.317664 | orchestrator | 2025-09-02 00:52:42.317684 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-02 00:52:42.317691 | orchestrator | Tuesday 02 September 2025 00:50:47 +0000 (0:00:01.927) 0:04:43.600 ***** 2025-09-02 00:52:42.317696 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:52:42.317702 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:52:42.317708 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:52:42.317713 | orchestrator | 2025-09-02 00:52:42.317718 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-02 00:52:42.317724 | orchestrator | Tuesday 02 September 2025 00:50:50 +0000 (0:00:02.548) 0:04:46.148 ***** 2025-09-02 00:52:42.317729 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:52:42.317735 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:52:42.317740 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:52:42.317745 | orchestrator | 2025-09-02 00:52:42.317753 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-02 00:52:42.317759 | orchestrator | Tuesday 02 September 2025 00:50:53 +0000 (0:00:03.088) 0:04:49.237 ***** 2025-09-02 00:52:42.317764 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-02 00:52:42.317770 | orchestrator | 2025-09-02 00:52:42.317775 | orchestrator 
| TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-02 00:52:42.317781 | orchestrator | Tuesday 02 September 2025 00:50:54 +0000 (0:00:00.840) 0:04:50.078 ***** 2025-09-02 00:52:42.317786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-02 00:52:42.317792 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.317797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-02 00:52:42.317807 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.317812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-02 00:52:42.317818 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.317823 | orchestrator | 2025-09-02 00:52:42.317829 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-02 00:52:42.317835 | orchestrator | Tuesday 02 September 2025 00:50:55 +0000 (0:00:01.341) 0:04:51.419 ***** 2025-09-02 00:52:42.317840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-02 00:52:42.317846 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.317851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-02 00:52:42.317857 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.317878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-02 00:52:42.317884 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.317889 | orchestrator | 2025-09-02 00:52:42.317897 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-02 00:52:42.317902 | orchestrator | Tuesday 02 September 2025 00:50:57 +0000 (0:00:01.356) 0:04:52.776 ***** 2025-09-02 00:52:42.317906 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.317911 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.317916 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.317921 | orchestrator | 2025-09-02 00:52:42.317926 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-02 00:52:42.317930 | orchestrator | Tuesday 02 September 2025 00:50:58 +0000 (0:00:01.620) 0:04:54.396 ***** 2025-09-02 00:52:42.317935 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:52:42.317940 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:52:42.317948 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:52:42.317953 | orchestrator | 2025-09-02 00:52:42.317958 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-02 00:52:42.317962 | orchestrator | Tuesday 02 September 2025 00:51:00 +0000 (0:00:02.330) 0:04:56.727 ***** 2025-09-02 00:52:42.317967 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:52:42.317972 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:52:42.317977 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:52:42.317982 | orchestrator | 2025-09-02 00:52:42.317987 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-02 00:52:42.317991 | orchestrator | Tuesday 02 September 2025 00:51:04 +0000 (0:00:03.215) 0:04:59.943 ***** 2025-09-02 00:52:42.317996 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.318001 | orchestrator | 2025-09-02 00:52:42.318006 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-02 00:52:42.318011 | orchestrator | Tuesday 02 September 2025 00:51:05 +0000 (0:00:01.651) 0:05:01.594 ***** 2025-09-02 00:52:42.318037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.318043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-02 00:52:42.318048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-02 00:52:42.318069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-02 00:52:42.318077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.318086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.318091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-02 00:52:42.318096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-02 00:52:42.318101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.318120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-02 00:52:42.318133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}})  2025-09-02 00:52:42.318138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.318143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-02 00:52:42.318148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-02 00:52:42.318153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.318158 | orchestrator | 2025-09-02 00:52:42.318163 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-02 00:52:42.318168 | orchestrator | Tuesday 02 September 2025 00:51:09 +0000 (0:00:03.458) 0:05:05.053 ***** 2025-09-02 00:52:42.318187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.318199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-02 00:52:42.318204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-02 00:52:42.318209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-02 00:52:42.318215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.318220 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.318225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.318243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-02 00:52:42.318256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-02 00:52:42.318261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-02 00:52:42.318266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.318271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.318276 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.318281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-02 00:52:42.318286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-02 00:52:42.318321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-02 00:52:42.318329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-02 00:52:42.318334 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.318339 | orchestrator | 2025-09-02 00:52:42.318344 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-02 00:52:42.318349 | orchestrator | Tuesday 02 September 2025 00:51:10 +0000 (0:00:00.729) 0:05:05.782 ***** 2025-09-02 00:52:42.318354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-02 00:52:42.318359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-02 00:52:42.318364 | orchestrator | skipping: [testbed-node-0] 2025-09-02 
00:52:42.318369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-02 00:52:42.318374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-02 00:52:42.318379 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.318384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-02 00:52:42.318389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-02 00:52:42.318394 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.318399 | orchestrator | 2025-09-02 00:52:42.318404 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-02 00:52:42.318409 | orchestrator | Tuesday 02 September 2025 00:51:11 +0000 (0:00:01.502) 0:05:07.285 ***** 2025-09-02 00:52:42.318414 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.318418 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.318423 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.318428 | orchestrator | 2025-09-02 00:52:42.318433 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-02 00:52:42.318438 | orchestrator | Tuesday 02 September 2025 00:51:13 +0000 (0:00:01.479) 0:05:08.765 ***** 2025-09-02 00:52:42.318446 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.318451 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.318456 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.318461 | orchestrator | 2025-09-02 00:52:42.318466 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-02 00:52:42.318470 | orchestrator | Tuesday 02 September 2025 00:51:15 +0000 (0:00:02.154) 0:05:10.919 ***** 2025-09-02 00:52:42.318475 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.318480 | orchestrator | 2025-09-02 00:52:42.318485 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-02 00:52:42.318490 | orchestrator | Tuesday 02 September 2025 00:51:16 +0000 (0:00:01.457) 0:05:12.376 ***** 2025-09-02 00:52:42.318509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-02 00:52:42.318518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-02 00:52:42.318523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-02 00:52:42.318529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-02 00:52:42.318552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-02 00:52:42.318561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-02 00:52:42.318566 | orchestrator | 2025-09-02 00:52:42.318571 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-02 00:52:42.318576 | orchestrator | Tuesday 02 September 2025 00:51:22 +0000 (0:00:05.565) 0:05:17.942 ***** 2025-09-02 00:52:42.318581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-02 00:52:42.318587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-02 00:52:42.318595 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.318600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-02 00:52:42.318622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-02 00:52:42.318628 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.318633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-02 00:52:42.318638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-02 00:52:42.318647 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.318652 | orchestrator | 2025-09-02 00:52:42.318657 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-02 00:52:42.318661 | orchestrator | Tuesday 02 September 2025 00:51:22 +0000 (0:00:00.669) 0:05:18.612 ***** 2025-09-02 00:52:42.318666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-02 00:52:42.318671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-02 00:52:42.318677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-02 00:52:42.318681 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.318686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-02 00:52:42.318704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-02 00:52:42.318710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-02 00:52:42.318715 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.318724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-02 00:52:42.318729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-02 00:52:42.318734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-02 00:52:42.318738 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.318743 | orchestrator | 2025-09-02 00:52:42.318748 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-02 00:52:42.318753 | orchestrator | Tuesday 02 September 2025 00:51:23 +0000 (0:00:00.933) 0:05:19.546 ***** 2025-09-02 00:52:42.318758 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.318763 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.318768 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.318772 | orchestrator | 2025-09-02 00:52:42.318777 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-02 00:52:42.318785 | orchestrator | Tuesday 02 September 2025 00:51:24 +0000 (0:00:00.809) 0:05:20.356 ***** 2025-09-02 00:52:42.318790 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.318795 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.318799 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.318804 | orchestrator | 2025-09-02 00:52:42.318809 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-09-02 00:52:42.318814 | orchestrator | Tuesday 02 September 2025 00:51:25 +0000 (0:00:01.326) 0:05:21.682 ***** 2025-09-02 00:52:42.318819 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.318823 | orchestrator | 2025-09-02 00:52:42.318828 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-02 00:52:42.318833 | orchestrator | Tuesday 02 September 2025 00:51:27 +0000 (0:00:01.435) 0:05:23.117 ***** 2025-09-02 00:52:42.318838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-02 00:52:42.318844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 00:52:42.318849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.318868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.318876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 00:52:42.318882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-02 00:52:42.318890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 00:52:42.318896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.318901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-02 00:52:42.318906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.318926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 00:52:42.318934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 00:52:42.318942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.318947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.318952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 00:52:42.318957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-02 00:52:42.318969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-02 00:52:42.318976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.318984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.318990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-02 00:52:42.318995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-02 00:52:42.319000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-02 00:52:42.319009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-02 00:52:42.319016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.319024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-02 00:52:42.319030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.319035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.319040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-02 00:52:42.319045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.319052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-02 00:52:42.319057 | orchestrator | 2025-09-02 00:52:42.319062 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-09-02 00:52:42.319067 | orchestrator | Tuesday 02 September 2025 00:51:31 +0000 (0:00:04.499) 0:05:27.617 ***** 2025-09-02 00:52:42.319075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-02 00:52:42.319080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 00:52:42.319096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.319101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.319106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 00:52:42.319114 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-02 00:52:42.319122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-02 00:52:42.319131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.319137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.319142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-02 00:52:42.319146 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.319152 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-02 00:52:42.319157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 00:52:42.319164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.319177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.319183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 00:52:42.319188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-02 00:52:42.319193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-02 00:52:42.319198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.319205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.319216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-02 00:52:42.319221 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.319229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-02 00:52:42.319235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 00:52:42.319240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.319245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.319250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 00:52:42.319257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-02 00:52:42.319269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-02 00:52:42.319274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.319279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 00:52:42.319284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-02 00:52:42.319289 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.319294 | orchestrator | 2025-09-02 00:52:42.319299 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-09-02 00:52:42.319304 | orchestrator | Tuesday 02 September 2025 00:51:33 +0000 (0:00:01.256) 0:05:28.873 ***** 2025-09-02 00:52:42.319323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-02 00:52:42.319328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-02 00:52:42.319333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True}})  2025-09-02 00:52:42.319342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-02 00:52:42.319348 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.319353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-02 00:52:42.319360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-02 00:52:42.319366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-02 00:52:42.319374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-02 00:52:42.319379 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.319384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-02 00:52:42.319389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-02 00:52:42.319394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-02 00:52:42.319399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-02 00:52:42.319404 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.319409 | orchestrator | 2025-09-02 00:52:42.319413 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-09-02 00:52:42.319418 | orchestrator | Tuesday 02 September 2025 00:51:34 +0000 (0:00:01.022) 0:05:29.896 ***** 2025-09-02 00:52:42.319423 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.319428 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.319433 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.319438 | orchestrator | 2025-09-02 00:52:42.319443 | orchestrator | TASK [proxysql-config : 
Copying over prometheus ProxySQL rules config] ********* 2025-09-02 00:52:42.319447 | orchestrator | Tuesday 02 September 2025 00:51:34 +0000 (0:00:00.467) 0:05:30.364 ***** 2025-09-02 00:52:42.319452 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.319457 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.319462 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.319466 | orchestrator | 2025-09-02 00:52:42.319471 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-09-02 00:52:42.319480 | orchestrator | Tuesday 02 September 2025 00:51:36 +0000 (0:00:01.525) 0:05:31.889 ***** 2025-09-02 00:52:42.319485 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.319490 | orchestrator | 2025-09-02 00:52:42.319495 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-09-02 00:52:42.319499 | orchestrator | Tuesday 02 September 2025 00:51:37 +0000 (0:00:01.761) 0:05:33.651 ***** 2025-09-02 00:52:42.319504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-02 00:52:42.319516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-02 00:52:42.319521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-02 00:52:42.319527 | orchestrator | 2025-09-02 00:52:42.319532 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-09-02 00:52:42.319537 | orchestrator | Tuesday 02 September 2025 00:51:40 +0000 (0:00:02.502) 0:05:36.153 ***** 2025-09-02 00:52:42.319542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-02 00:52:42.319551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-02 00:52:42.319556 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.319561 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.319571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-02 00:52:42.319577 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.319582 | orchestrator | 2025-09-02 00:52:42.319587 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-09-02 00:52:42.319592 | orchestrator | Tuesday 02 September 2025 00:51:40 +0000 (0:00:00.431) 0:05:36.584 ***** 2025-09-02 00:52:42.319596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-02 00:52:42.319601 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.319606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-02 00:52:42.319611 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.319616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-02 00:52:42.319621 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.319626 | orchestrator | 2025-09-02 00:52:42.319631 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-09-02 00:52:42.319639 | orchestrator | Tuesday 02 September 2025 00:51:41 +0000 (0:00:00.985) 0:05:37.570 ***** 2025-09-02 00:52:42.319644 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.319649 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.319654 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.319659 | orchestrator | 2025-09-02 00:52:42.319663 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-09-02 00:52:42.319668 | orchestrator | Tuesday 02 September 2025 00:51:42 +0000 (0:00:00.425) 0:05:37.995 ***** 2025-09-02 00:52:42.319673 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.319678 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.319683 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.319687 | orchestrator | 2025-09-02 00:52:42.319692 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-09-02 00:52:42.319697 | orchestrator | Tuesday 02 September 2025 00:51:43 +0000 (0:00:01.344) 0:05:39.339 ***** 2025-09-02 00:52:42.319702 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:52:42.319707 | orchestrator | 2025-09-02 00:52:42.319712 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-09-02 00:52:42.319717 | orchestrator | Tuesday 02 September 2025 00:51:45 +0000 (0:00:01.757) 0:05:41.096 ***** 2025-09-02 00:52:42.319722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.319729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.319737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.319748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.319753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.319759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-02 00:52:42.319764 | orchestrator | 2025-09-02 00:52:42.319771 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-09-02 00:52:42.319776 | orchestrator | Tuesday 02 September 2025 00:51:51 +0000 (0:00:06.276) 0:05:47.373 ***** 2025-09-02 00:52:42.319784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.319795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.319800 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.319805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.319811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.319816 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.319827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.319832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-02 00:52:42.319841 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.319846 | orchestrator | 2025-09-02 00:52:42.319851 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-02 00:52:42.319856 | orchestrator | Tuesday 02 September 2025 00:51:52 +0000 (0:00:00.703) 0:05:48.077 ***** 2025-09-02 00:52:42.319861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-02 00:52:42.319866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-02 00:52:42.319871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-02 00:52:42.319876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-02 00:52:42.319881 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.319886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-02 00:52:42.319891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-02 00:52:42.319896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-02 00:52:42.319901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-02 00:52:42.319906 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.319911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-02 00:52:42.319918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-02 00:52:42.319923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-02 00:52:42.319935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-02 00:52:42.319942 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.319948 | orchestrator | 2025-09-02 00:52:42.319952 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-09-02 00:52:42.319957 | orchestrator | Tuesday 02 September 2025 00:51:54 +0000 (0:00:01.683) 0:05:49.760 ***** 2025-09-02 00:52:42.319962 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.319967 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.319972 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.319976 | orchestrator | 2025-09-02 00:52:42.319981 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-09-02 00:52:42.319986 | orchestrator | Tuesday 02 September 2025 00:51:55 +0000 (0:00:01.357) 0:05:51.117 ***** 2025-09-02 00:52:42.319991 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.319996 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.320001 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.320005 | orchestrator | 2025-09-02 00:52:42.320010 | orchestrator | TASK [include_role : swift] **************************************************** 2025-09-02 00:52:42.320015 | orchestrator | Tuesday 02 September 2025 00:51:57 +0000 (0:00:02.167) 0:05:53.285 ***** 2025-09-02 00:52:42.320020 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.320025 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.320030 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.320034 | orchestrator | 2025-09-02 00:52:42.320039 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-09-02 00:52:42.320044 | orchestrator | Tuesday 02 September 2025 00:51:57 +0000 (0:00:00.357) 0:05:53.642 ***** 2025-09-02 00:52:42.320049 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.320054 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.320058 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.320063 | orchestrator | 2025-09-02 00:52:42.320068 | orchestrator | TASK [include_role : trove] **************************************************** 2025-09-02 00:52:42.320073 | orchestrator | Tuesday 02 September 2025 00:51:58 +0000 (0:00:00.362) 0:05:54.004 ***** 2025-09-02 00:52:42.320078 | orchestrator | skipping: 
[testbed-node-0] 2025-09-02 00:52:42.320083 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.320088 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.320092 | orchestrator | 2025-09-02 00:52:42.320097 | orchestrator | TASK [include_role : venus] **************************************************** 2025-09-02 00:52:42.320102 | orchestrator | Tuesday 02 September 2025 00:51:58 +0000 (0:00:00.667) 0:05:54.672 ***** 2025-09-02 00:52:42.320107 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.320112 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.320117 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.320121 | orchestrator | 2025-09-02 00:52:42.320126 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-09-02 00:52:42.320131 | orchestrator | Tuesday 02 September 2025 00:51:59 +0000 (0:00:00.329) 0:05:55.002 ***** 2025-09-02 00:52:42.320136 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.320141 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.320146 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.320150 | orchestrator | 2025-09-02 00:52:42.320155 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-09-02 00:52:42.320160 | orchestrator | Tuesday 02 September 2025 00:51:59 +0000 (0:00:00.303) 0:05:55.306 ***** 2025-09-02 00:52:42.320165 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.320170 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.320175 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.320179 | orchestrator | 2025-09-02 00:52:42.320184 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-09-02 00:52:42.320193 | orchestrator | Tuesday 02 September 2025 00:52:00 +0000 (0:00:00.818) 0:05:56.124 ***** 2025-09-02 00:52:42.320198 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:52:42.320203 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:52:42.320208 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:52:42.320213 | orchestrator | 2025-09-02 00:52:42.320217 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-09-02 00:52:42.320222 | orchestrator | Tuesday 02 September 2025 00:52:01 +0000 (0:00:00.720) 0:05:56.845 ***** 2025-09-02 00:52:42.320227 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:52:42.320232 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:52:42.320237 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:52:42.320242 | orchestrator | 2025-09-02 00:52:42.320247 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-09-02 00:52:42.320251 | orchestrator | Tuesday 02 September 2025 00:52:01 +0000 (0:00:00.372) 0:05:57.217 ***** 2025-09-02 00:52:42.320256 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:52:42.320261 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:52:42.320266 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:52:42.320271 | orchestrator | 2025-09-02 00:52:42.320276 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-09-02 00:52:42.320280 | orchestrator | Tuesday 02 September 2025 00:52:02 +0000 (0:00:00.961) 0:05:58.179 ***** 2025-09-02 00:52:42.320285 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:52:42.320290 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:52:42.320295 | 
orchestrator | ok: [testbed-node-1] 2025-09-02 00:52:42.320300 | orchestrator | 2025-09-02 00:52:42.320305 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-09-02 00:52:42.320342 | orchestrator | Tuesday 02 September 2025 00:52:03 +0000 (0:00:01.234) 0:05:59.414 ***** 2025-09-02 00:52:42.320347 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:52:42.320352 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:52:42.320359 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:52:42.320364 | orchestrator | 2025-09-02 00:52:42.320369 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-09-02 00:52:42.320374 | orchestrator | Tuesday 02 September 2025 00:52:04 +0000 (0:00:00.907) 0:06:00.322 ***** 2025-09-02 00:52:42.320379 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.320384 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.320389 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.320394 | orchestrator | 2025-09-02 00:52:42.320398 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-09-02 00:52:42.320403 | orchestrator | Tuesday 02 September 2025 00:52:09 +0000 (0:00:05.087) 0:06:05.409 ***** 2025-09-02 00:52:42.320408 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:52:42.320413 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:52:42.320418 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:52:42.320422 | orchestrator | 2025-09-02 00:52:42.320430 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-09-02 00:52:42.320436 | orchestrator | Tuesday 02 September 2025 00:52:13 +0000 (0:00:03.798) 0:06:09.208 ***** 2025-09-02 00:52:42.320440 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.320445 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.320450 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.320455 | orchestrator | 2025-09-02 00:52:42.320460 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-09-02 00:52:42.320465 | orchestrator | Tuesday 02 September 2025 00:52:26 +0000 (0:00:13.060) 0:06:22.268 ***** 2025-09-02 00:52:42.320469 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:52:42.320475 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:52:42.320479 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:52:42.320484 | orchestrator | 2025-09-02 00:52:42.320489 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-09-02 00:52:42.320494 | orchestrator | Tuesday 02 September 2025 00:52:27 +0000 (0:00:01.120) 0:06:23.389 ***** 2025-09-02 00:52:42.320503 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:52:42.320508 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:52:42.320513 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:52:42.320518 | orchestrator | 2025-09-02 00:52:42.320523 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-09-02 00:52:42.320528 | orchestrator | Tuesday 02 September 2025 00:52:32 +0000 (0:00:04.572) 0:06:27.962 ***** 2025-09-02 00:52:42.320532 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.320537 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.320541 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.320546 | orchestrator | 2025-09-02 00:52:42.320550 | orchestrator | 
RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-09-02 00:52:42.320555 | orchestrator | Tuesday 02 September 2025 00:52:32 +0000 (0:00:00.339) 0:06:28.301 ***** 2025-09-02 00:52:42.320560 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.320564 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.320569 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.320573 | orchestrator | 2025-09-02 00:52:42.320578 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-09-02 00:52:42.320582 | orchestrator | Tuesday 02 September 2025 00:52:32 +0000 (0:00:00.346) 0:06:28.648 ***** 2025-09-02 00:52:42.320587 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.320592 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.320596 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.320601 | orchestrator | 2025-09-02 00:52:42.320605 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-09-02 00:52:42.320610 | orchestrator | Tuesday 02 September 2025 00:52:33 +0000 (0:00:00.681) 0:06:29.329 ***** 2025-09-02 00:52:42.320614 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.320619 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.320623 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.320628 | orchestrator | 2025-09-02 00:52:42.320632 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-09-02 00:52:42.320637 | orchestrator | Tuesday 02 September 2025 00:52:33 +0000 (0:00:00.370) 0:06:29.699 ***** 2025-09-02 00:52:42.320642 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.320646 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.320651 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.320655 | orchestrator | 2025-09-02 00:52:42.320660 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-09-02 00:52:42.320665 | orchestrator | Tuesday 02 September 2025 00:52:34 +0000 (0:00:00.356) 0:06:30.056 ***** 2025-09-02 00:52:42.320669 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:52:42.320674 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:52:42.320678 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:52:42.320683 | orchestrator | 2025-09-02 00:52:42.320688 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-09-02 00:52:42.320692 | orchestrator | Tuesday 02 September 2025 00:52:34 +0000 (0:00:00.375) 0:06:30.431 ***** 2025-09-02 00:52:42.320697 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:52:42.320702 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:52:42.320706 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:52:42.320711 | orchestrator | 2025-09-02 00:52:42.320716 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-09-02 00:52:42.320720 | orchestrator | Tuesday 02 September 2025 00:52:40 +0000 (0:00:05.336) 0:06:35.768 ***** 2025-09-02 00:52:42.320725 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:52:42.320730 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:52:42.320734 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:52:42.320739 | orchestrator | 2025-09-02 00:52:42.320743 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 
00:52:42.320748 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-02 00:52:42.320757 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-02 00:52:42.320762 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-02 00:52:42.320767 | orchestrator | 2025-09-02 00:52:42.320771 | orchestrator | 2025-09-02 00:52:42.320778 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:52:42.320783 | orchestrator | Tuesday 02 September 2025 00:52:40 +0000 (0:00:00.815) 0:06:36.584 ***** 2025-09-02 00:52:42.320787 | orchestrator | =============================================================================== 2025-09-02 00:52:42.320792 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.06s 2025-09-02 00:52:42.320797 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.99s 2025-09-02 00:52:42.320801 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.28s 2025-09-02 00:52:42.320809 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 5.59s 2025-09-02 00:52:42.320814 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.57s 2025-09-02 00:52:42.320818 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.34s 2025-09-02 00:52:42.320823 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.09s 2025-09-02 00:52:42.320827 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.79s 2025-09-02 00:52:42.320832 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.68s 2025-09-02 00:52:42.320837 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.65s 2025-09-02 00:52:42.320841 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.57s 2025-09-02 00:52:42.320846 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.50s 2025-09-02 00:52:42.320850 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.41s 2025-09-02 00:52:42.320855 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.38s 2025-09-02 00:52:42.320859 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.37s 2025-09-02 00:52:42.320864 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.18s 2025-09-02 00:52:42.320869 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.15s 2025-09-02 00:52:42.320873 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.14s 2025-09-02 00:52:42.320878 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.11s 2025-09-02 00:52:42.320882 | orchestrator | loadbalancer : Ensuring proxysql service config subdirectories exist ---- 4.08s 2025-09-02 00:52:42.320887 | orchestrator | 2025-09-02 00:52:42 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:52:45.349424 | orchestrator | 2025-09-02 00:52:45 | INFO  | Task 
ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:52:45.351535 | orchestrator | 2025-09-02 00:52:45 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:52:45.352456 | orchestrator | 2025-09-02 00:52:45 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:52:45.352858 | orchestrator | 2025-09-02 00:52:45 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:52:48.406791 | orchestrator | 2025-09-02 00:52:48 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:52:48.407379 | orchestrator | 2025-09-02 00:52:48 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:52:48.408124 | orchestrator | 2025-09-02 00:52:48 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:52:48.408535 | orchestrator | 2025-09-02 00:52:48 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:52:51.466494 | orchestrator | 2025-09-02 00:52:51 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:52:51.466737 | orchestrator | 2025-09-02 00:52:51 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:52:51.467665 | orchestrator | 2025-09-02 00:52:51 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:52:51.467776 | orchestrator | 2025-09-02 00:52:51 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:52:54.504512 | orchestrator | 2025-09-02 00:52:54 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:52:54.505383 | orchestrator | 2025-09-02 00:52:54 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:52:54.506502 | orchestrator | 2025-09-02 00:52:54 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:52:54.506874 | orchestrator | 2025-09-02 00:52:54 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:52:57.556707 | orchestrator | 2025-09-02 00:52:57 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:52:57.557730 | orchestrator | 2025-09-02 00:52:57 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:52:57.560056 | orchestrator | 2025-09-02 00:52:57 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:52:57.560142 | orchestrator | 2025-09-02 00:52:57 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:00.704404 | orchestrator | 2025-09-02 00:53:00 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:00.705067 | orchestrator | 2025-09-02 00:53:00 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:00.706628 | orchestrator | 2025-09-02 00:53:00 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:00.706673 | orchestrator | 2025-09-02 00:53:00 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:03.758776 | orchestrator | 2025-09-02 00:53:03 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:03.758889 | orchestrator | 2025-09-02 00:53:03 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:03.762946 | orchestrator | 2025-09-02 00:53:03 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:03.762980 | orchestrator | 2025-09-02 00:53:03 | INFO  | Wait 1 second(s) until the next 
check 2025-09-02 00:53:06.852463 | orchestrator | 2025-09-02 00:53:06 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:06.854417 | orchestrator | 2025-09-02 00:53:06 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:06.855321 | orchestrator | 2025-09-02 00:53:06 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:06.857823 | orchestrator | 2025-09-02 00:53:06 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:09.908005 | orchestrator | 2025-09-02 00:53:09 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:09.908106 | orchestrator | 2025-09-02 00:53:09 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:09.908119 | orchestrator | 2025-09-02 00:53:09 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:09.908129 | orchestrator | 2025-09-02 00:53:09 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:12.936980 | orchestrator | 2025-09-02 00:53:12 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:12.937241 | orchestrator | 2025-09-02 00:53:12 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:12.938577 | orchestrator | 2025-09-02 00:53:12 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:12.938628 | orchestrator | 2025-09-02 00:53:12 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:15.980085 | orchestrator | 2025-09-02 00:53:15 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:15.983995 | orchestrator | 2025-09-02 00:53:15 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:15.985742 | orchestrator | 2025-09-02 00:53:15 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:15.986198 | orchestrator | 2025-09-02 00:53:15 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:19.041498 | orchestrator | 2025-09-02 00:53:19 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:19.042717 | orchestrator | 2025-09-02 00:53:19 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:19.044471 | orchestrator | 2025-09-02 00:53:19 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:19.044512 | orchestrator | 2025-09-02 00:53:19 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:22.114258 | orchestrator | 2025-09-02 00:53:22 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:22.115787 | orchestrator | 2025-09-02 00:53:22 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:22.117464 | orchestrator | 2025-09-02 00:53:22 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:22.117637 | orchestrator | 2025-09-02 00:53:22 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:25.156126 | orchestrator | 2025-09-02 00:53:25 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:25.156599 | orchestrator | 2025-09-02 00:53:25 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:25.158913 | orchestrator | 2025-09-02 00:53:25 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 
00:53:25.158938 | orchestrator | 2025-09-02 00:53:25 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:28.203408 | orchestrator | 2025-09-02 00:53:28 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:28.205005 | orchestrator | 2025-09-02 00:53:28 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:28.207306 | orchestrator | 2025-09-02 00:53:28 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:28.207676 | orchestrator | 2025-09-02 00:53:28 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:31.252231 | orchestrator | 2025-09-02 00:53:31 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:31.252820 | orchestrator | 2025-09-02 00:53:31 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:31.255290 | orchestrator | 2025-09-02 00:53:31 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:31.255344 | orchestrator | 2025-09-02 00:53:31 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:34.304593 | orchestrator | 2025-09-02 00:53:34 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:34.306158 | orchestrator | 2025-09-02 00:53:34 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:34.309609 | orchestrator | 2025-09-02 00:53:34 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:34.309814 | orchestrator | 2025-09-02 00:53:34 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:37.346472 | orchestrator | 2025-09-02 00:53:37 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:37.346723 | orchestrator | 2025-09-02 00:53:37 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:37.347572 | orchestrator | 2025-09-02 00:53:37 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:37.347681 | orchestrator | 2025-09-02 00:53:37 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:40.397434 | orchestrator | 2025-09-02 00:53:40 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:40.398530 | orchestrator | 2025-09-02 00:53:40 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:40.399707 | orchestrator | 2025-09-02 00:53:40 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:40.403440 | orchestrator | 2025-09-02 00:53:40 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:43.445820 | orchestrator | 2025-09-02 00:53:43 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:43.448743 | orchestrator | 2025-09-02 00:53:43 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:43.450549 | orchestrator | 2025-09-02 00:53:43 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:43.450598 | orchestrator | 2025-09-02 00:53:43 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:46.492516 | orchestrator | 2025-09-02 00:53:46 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:46.494350 | orchestrator | 2025-09-02 00:53:46 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:46.497169 | orchestrator | 2025-09-02 
00:53:46 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:46.497197 | orchestrator | 2025-09-02 00:53:46 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:49.543517 | orchestrator | 2025-09-02 00:53:49 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:49.544316 | orchestrator | 2025-09-02 00:53:49 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:49.546943 | orchestrator | 2025-09-02 00:53:49 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:49.546969 | orchestrator | 2025-09-02 00:53:49 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:52.589068 | orchestrator | 2025-09-02 00:53:52 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:52.589630 | orchestrator | 2025-09-02 00:53:52 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:52.590582 | orchestrator | 2025-09-02 00:53:52 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:52.590844 | orchestrator | 2025-09-02 00:53:52 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:55.649581 | orchestrator | 2025-09-02 00:53:55 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:55.651881 | orchestrator | 2025-09-02 00:53:55 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:55.653031 | orchestrator | 2025-09-02 00:53:55 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:55.653096 | orchestrator | 2025-09-02 00:53:55 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:53:58.698284 | orchestrator | 2025-09-02 00:53:58 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:53:58.700423 | orchestrator | 2025-09-02 00:53:58 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:53:58.703246 | orchestrator | 2025-09-02 00:53:58 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:53:58.703278 | orchestrator | 2025-09-02 00:53:58 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:54:01.745307 | orchestrator | 2025-09-02 00:54:01 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:54:01.747761 | orchestrator | 2025-09-02 00:54:01 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:54:01.750352 | orchestrator | 2025-09-02 00:54:01 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:54:01.750524 | orchestrator | 2025-09-02 00:54:01 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:54:04.798782 | orchestrator | 2025-09-02 00:54:04 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:54:04.801113 | orchestrator | 2025-09-02 00:54:04 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:54:04.803458 | orchestrator | 2025-09-02 00:54:04 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:54:04.803497 | orchestrator | 2025-09-02 00:54:04 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:54:07.851618 | orchestrator | 2025-09-02 00:54:07 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:54:07.852545 | orchestrator | 2025-09-02 00:54:07 | INFO  | Task 
439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:54:07.854629 | orchestrator | 2025-09-02 00:54:07 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:54:07.854666 | orchestrator | 2025-09-02 00:54:07 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:54:10.907810 | orchestrator | 2025-09-02 00:54:10 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:54:10.910186 | orchestrator | 2025-09-02 00:54:10 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:54:10.912373 | orchestrator | 2025-09-02 00:54:10 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:54:10.912697 | orchestrator | 2025-09-02 00:54:10 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:54:13.958819 | orchestrator | 2025-09-02 00:54:13 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:54:13.960506 | orchestrator | 2025-09-02 00:54:13 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:54:13.962565 | orchestrator | 2025-09-02 00:54:13 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:54:13.962692 | orchestrator | 2025-09-02 00:54:13 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:54:17.026171 | orchestrator | 2025-09-02 00:54:17 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:54:17.027649 | orchestrator | 2025-09-02 00:54:17 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:54:17.029567 | orchestrator | 2025-09-02 00:54:17 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:54:17.030097 | orchestrator | 2025-09-02 00:54:17 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:54:20.075734 | orchestrator | 2025-09-02 00:54:20 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:54:20.077834 | orchestrator | 2025-09-02 00:54:20 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:54:20.078834 | orchestrator | 2025-09-02 00:54:20 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:54:20.079094 | orchestrator | 2025-09-02 00:54:20 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:54:23.129384 | orchestrator | 2025-09-02 00:54:23 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:54:23.130948 | orchestrator | 2025-09-02 00:54:23 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:54:23.132874 | orchestrator | 2025-09-02 00:54:23 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:54:23.132914 | orchestrator | 2025-09-02 00:54:23 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:54:26.177732 | orchestrator | 2025-09-02 00:54:26 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:54:26.179612 | orchestrator | 2025-09-02 00:54:26 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:54:26.182685 | orchestrator | 2025-09-02 00:54:26 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:54:26.182708 | orchestrator | 2025-09-02 00:54:26 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:54:29.227329 | orchestrator | 2025-09-02 00:54:29 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state 
STARTED 2025-09-02 00:54:29.229084 | orchestrator | 2025-09-02 00:54:29 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:54:29.231272 | orchestrator | 2025-09-02 00:54:29 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:54:29.231379 | orchestrator | 2025-09-02 00:54:29 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:54:32.274684 | orchestrator | 2025-09-02 00:54:32 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:54:32.277516 | orchestrator | 2025-09-02 00:54:32 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:54:32.281474 | orchestrator | 2025-09-02 00:54:32 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:54:32.281966 | orchestrator | 2025-09-02 00:54:32 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:54:35.331832 | orchestrator | 2025-09-02 00:54:35 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:54:35.334143 | orchestrator | 2025-09-02 00:54:35 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:54:35.337144 | orchestrator | 2025-09-02 00:54:35 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:54:35.337168 | orchestrator | 2025-09-02 00:54:35 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:54:38.372519 | orchestrator | 2025-09-02 00:54:38 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:54:38.374480 | orchestrator | 2025-09-02 00:54:38 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:54:38.377368 | orchestrator | 2025-09-02 00:54:38 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:54:38.377560 | orchestrator | 2025-09-02 00:54:38 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:54:41.423570 | orchestrator | 2025-09-02 00:54:41 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:54:41.426299 | orchestrator | 2025-09-02 00:54:41 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:54:41.428480 | orchestrator | 2025-09-02 00:54:41 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:54:41.428507 | orchestrator | 2025-09-02 00:54:41 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:54:44.470199 | orchestrator | 2025-09-02 00:54:44 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:54:44.471490 | orchestrator | 2025-09-02 00:54:44 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:54:44.472910 | orchestrator | 2025-09-02 00:54:44 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:54:44.473009 | orchestrator | 2025-09-02 00:54:44 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:54:47.521648 | orchestrator | 2025-09-02 00:54:47 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED 2025-09-02 00:54:47.523529 | orchestrator | 2025-09-02 00:54:47 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:54:47.525118 | orchestrator | 2025-09-02 00:54:47 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:54:47.525142 | orchestrator | 2025-09-02 00:54:47 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:54:50.583398 | orchestrator 
| 2025-09-02 00:54:50 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED
2025-09-02 00:54:50.585038 | orchestrator | 2025-09-02 00:54:50 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED
2025-09-02 00:54:50.586822 | orchestrator | 2025-09-02 00:54:50 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED
2025-09-02 00:54:50.586918 | orchestrator | 2025-09-02 00:54:50 | INFO  | Wait 1 second(s) until the next check
2025-09-02 00:54:53.640140 | orchestrator | 2025-09-02 00:54:53 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED
2025-09-02 00:54:53.640629 | orchestrator | 2025-09-02 00:54:53 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED
2025-09-02 00:54:53.643081 | orchestrator | 2025-09-02 00:54:53 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED
2025-09-02 00:54:53.643221 | orchestrator | 2025-09-02 00:54:53 | INFO  | Wait 1 second(s) until the next check
2025-09-02 00:54:56.693928 | orchestrator | 2025-09-02 00:54:56 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED
2025-09-02 00:54:56.694862 | orchestrator | 2025-09-02 00:54:56 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED
2025-09-02 00:54:56.697239 | orchestrator | 2025-09-02 00:54:56 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED
2025-09-02 00:54:56.697520 | orchestrator | 2025-09-02 00:54:56 | INFO  | Wait 1 second(s) until the next check
2025-09-02 00:54:59.754610 | orchestrator | 2025-09-02 00:54:59 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED
2025-09-02 00:54:59.756682 | orchestrator | 2025-09-02 00:54:59 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED
2025-09-02 00:54:59.761014 | orchestrator | 2025-09-02 00:54:59 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED
2025-09-02 00:54:59.761072 | orchestrator | 2025-09-02 00:54:59 | INFO  | Wait 1 second(s) until the next check
2025-09-02 00:55:02.814311 | orchestrator | 2025-09-02 00:55:02 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state STARTED
2025-09-02 00:55:02.814740 | orchestrator | 2025-09-02 00:55:02 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED
2025-09-02 00:55:02.815911 | orchestrator | 2025-09-02 00:55:02 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED
2025-09-02 00:55:02.816089 | orchestrator | 2025-09-02 00:55:02 | INFO  | Wait 1 second(s) until the next check
2025-09-02 00:55:05.877731 | orchestrator | 2025-09-02 00:55:05 | INFO  | Task ca9f369c-1fc1-4e72-bbc1-684dcae7a9ff is in state SUCCESS
2025-09-02 00:55:05.880088 | orchestrator |
2025-09-02 00:55:05.880169 | orchestrator |
2025-09-02 00:55:05.880187 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-09-02 00:55:05.880200 | orchestrator |
2025-09-02 00:55:05.880212 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-09-02 00:55:05.880223 | orchestrator | Tuesday 02 September 2025 00:43:25 +0000 (0:00:01.038) 0:00:01.038 *****
2025-09-02 00:55:05.880307 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-02 00:55:05.880321 | orchestrator |
2025-09-02 00:55:05.880333 | orchestrator | TASK [ceph-facts : Check if it is atomic host]
********************************* 2025-09-02 00:55:05.880344 | orchestrator | Tuesday 02 September 2025 00:43:27 +0000 (0:00:01.480) 0:00:02.518 ***** 2025-09-02 00:55:05.880355 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.880366 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.880377 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.880388 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.880399 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.880410 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.880510 | orchestrator | 2025-09-02 00:55:05.880526 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-02 00:55:05.880537 | orchestrator | Tuesday 02 September 2025 00:43:28 +0000 (0:00:01.777) 0:00:04.295 ***** 2025-09-02 00:55:05.880548 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.880559 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.880570 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.880580 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.880592 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.880603 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.880636 | orchestrator | 2025-09-02 00:55:05.880648 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-02 00:55:05.880659 | orchestrator | Tuesday 02 September 2025 00:43:29 +0000 (0:00:00.710) 0:00:05.006 ***** 2025-09-02 00:55:05.880670 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.880743 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.880760 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.880773 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.880783 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.880794 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.880847 | orchestrator | 2025-09-02 00:55:05.880859 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-02 00:55:05.880871 | orchestrator | Tuesday 02 September 2025 00:43:30 +0000 (0:00:00.984) 0:00:05.991 ***** 2025-09-02 00:55:05.880882 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.880893 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.880904 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.880914 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.880925 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.880936 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.880946 | orchestrator | 2025-09-02 00:55:05.880958 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-02 00:55:05.880969 | orchestrator | Tuesday 02 September 2025 00:43:31 +0000 (0:00:00.819) 0:00:06.810 ***** 2025-09-02 00:55:05.881087 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.881099 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.881110 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.881133 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.881144 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.881155 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.881166 | orchestrator | 2025-09-02 00:55:05.881177 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-02 00:55:05.881270 | orchestrator | Tuesday 02 September 2025 00:43:32 +0000 (0:00:00.653) 0:00:07.464 ***** 2025-09-02 
00:55:05.881285 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.881296 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.881307 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.881318 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.881328 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.881339 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.881350 | orchestrator | 2025-09-02 00:55:05.881361 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-02 00:55:05.881372 | orchestrator | Tuesday 02 September 2025 00:43:33 +0000 (0:00:00.984) 0:00:08.448 ***** 2025-09-02 00:55:05.881384 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.881420 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.881515 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.881572 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.881584 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.881595 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.881606 | orchestrator | 2025-09-02 00:55:05.881617 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-02 00:55:05.881628 | orchestrator | Tuesday 02 September 2025 00:43:34 +0000 (0:00:00.985) 0:00:09.433 ***** 2025-09-02 00:55:05.881639 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.881650 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.881661 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.881672 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.881683 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.881694 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.881705 | orchestrator | 2025-09-02 00:55:05.881716 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-02 00:55:05.881726 | orchestrator | Tuesday 02 September 2025 00:43:35 +0000 (0:00:01.330) 0:00:10.764 ***** 2025-09-02 00:55:05.881737 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-02 00:55:05.881807 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-02 00:55:05.881820 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-02 00:55:05.881832 | orchestrator | 2025-09-02 00:55:05.881843 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-02 00:55:05.881853 | orchestrator | Tuesday 02 September 2025 00:43:36 +0000 (0:00:00.848) 0:00:11.612 ***** 2025-09-02 00:55:05.881864 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.881875 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.881886 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.881896 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.881907 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.881918 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.881929 | orchestrator | 2025-09-02 00:55:05.881954 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-02 00:55:05.881966 | orchestrator | Tuesday 02 September 2025 00:43:37 +0000 (0:00:01.350) 0:00:12.963 ***** 2025-09-02 00:55:05.881977 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-02 00:55:05.881988 | orchestrator | 
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-02 00:55:05.882159 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-02 00:55:05.882187 | orchestrator | 2025-09-02 00:55:05.882198 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-02 00:55:05.882209 | orchestrator | Tuesday 02 September 2025 00:43:40 +0000 (0:00:03.145) 0:00:16.109 ***** 2025-09-02 00:55:05.882221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-02 00:55:05.882232 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-02 00:55:05.882243 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-02 00:55:05.882254 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.882265 | orchestrator | 2025-09-02 00:55:05.882277 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-02 00:55:05.882288 | orchestrator | Tuesday 02 September 2025 00:43:41 +0000 (0:00:00.501) 0:00:16.610 ***** 2025-09-02 00:55:05.882300 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.882314 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.882325 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.882372 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.882385 | orchestrator | 2025-09-02 00:55:05.882396 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-02 00:55:05.882407 | orchestrator | Tuesday 02 September 2025 00:43:42 +0000 (0:00:00.766) 0:00:17.377 ***** 2025-09-02 00:55:05.882453 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.882545 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.882558 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.882569 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.882580 | orchestrator | 2025-09-02 00:55:05.882591 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-02 00:55:05.882602 | orchestrator | Tuesday 02 September 2025 00:43:42 +0000 (0:00:00.152) 0:00:17.529 ***** 2025-09-02 00:55:05.882626 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-02 00:43:38.449673', 'end': '2025-09-02 00:43:38.739467', 'delta': '0:00:00.289794', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.882648 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-02 00:43:39.394253', 'end': '2025-09-02 00:43:39.705198', 'delta': '0:00:00.310945', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.882672 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-02 00:43:40.279527', 'end': '2025-09-02 00:43:40.605802', 'delta': '0:00:00.326275', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.882693 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.882705 | orchestrator | 2025-09-02 00:55:05.882716 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-02 00:55:05.882770 | orchestrator | Tuesday 02 September 2025 00:43:42 +0000 (0:00:00.615) 0:00:18.145 ***** 2025-09-02 00:55:05.882782 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.882794 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.882804 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.882815 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.882825 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.882934 | orchestrator | ok: 
[testbed-node-2] 2025-09-02 00:55:05.882948 | orchestrator | 2025-09-02 00:55:05.882959 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-02 00:55:05.882970 | orchestrator | Tuesday 02 September 2025 00:43:44 +0000 (0:00:02.126) 0:00:20.271 ***** 2025-09-02 00:55:05.882987 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-02 00:55:05.882999 | orchestrator | 2025-09-02 00:55:05.883010 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-02 00:55:05.883021 | orchestrator | Tuesday 02 September 2025 00:43:45 +0000 (0:00:00.832) 0:00:21.103 ***** 2025-09-02 00:55:05.883032 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.883043 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.883054 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.883064 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.883075 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.883086 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.883097 | orchestrator | 2025-09-02 00:55:05.883108 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-02 00:55:05.883119 | orchestrator | Tuesday 02 September 2025 00:43:47 +0000 (0:00:01.480) 0:00:22.584 ***** 2025-09-02 00:55:05.883130 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.883141 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.883152 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.883162 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.883181 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.883192 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.883203 | orchestrator | 2025-09-02 00:55:05.883215 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-02 00:55:05.883225 | orchestrator | Tuesday 02 September 2025 00:43:49 +0000 (0:00:02.454) 0:00:25.039 ***** 2025-09-02 00:55:05.883237 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.883248 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.883258 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.883269 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.883280 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.883291 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.883302 | orchestrator | 2025-09-02 00:55:05.883313 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-02 00:55:05.883324 | orchestrator | Tuesday 02 September 2025 00:43:51 +0000 (0:00:01.378) 0:00:26.417 ***** 2025-09-02 00:55:05.883335 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.883346 | orchestrator | 2025-09-02 00:55:05.883356 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-02 00:55:05.883368 | orchestrator | Tuesday 02 September 2025 00:43:51 +0000 (0:00:00.189) 0:00:26.607 ***** 2025-09-02 00:55:05.883509 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.883524 | orchestrator | 2025-09-02 00:55:05.883535 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-02 00:55:05.883546 | orchestrator | Tuesday 02 September 2025 00:43:51 +0000 (0:00:00.461) 0:00:27.069 ***** 2025-09-02 
00:55:05.883557 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.883568 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.883578 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.883589 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.883600 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.883611 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.883622 | orchestrator | 2025-09-02 00:55:05.883642 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-02 00:55:05.883819 | orchestrator | Tuesday 02 September 2025 00:43:52 +0000 (0:00:00.842) 0:00:27.911 ***** 2025-09-02 00:55:05.883838 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.883850 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.883861 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.883872 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.883883 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.883894 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.883904 | orchestrator | 2025-09-02 00:55:05.883916 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-02 00:55:05.883927 | orchestrator | Tuesday 02 September 2025 00:43:53 +0000 (0:00:01.171) 0:00:29.082 ***** 2025-09-02 00:55:05.883937 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.883948 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.883959 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.883970 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.883981 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.883992 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.884002 | orchestrator | 2025-09-02 00:55:05.884014 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-02 00:55:05.884025 | orchestrator | Tuesday 02 September 2025 00:43:54 +0000 (0:00:01.077) 0:00:30.160 ***** 2025-09-02 00:55:05.884036 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.884047 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.884058 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.884069 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.884079 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.884090 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.884101 | orchestrator | 2025-09-02 00:55:05.884122 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-02 00:55:05.884133 | orchestrator | Tuesday 02 September 2025 00:43:56 +0000 (0:00:01.767) 0:00:31.927 ***** 2025-09-02 00:55:05.884144 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.884155 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.884165 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.884176 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.884187 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.884198 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.884209 | orchestrator | 2025-09-02 00:55:05.884220 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-02 00:55:05.884231 | orchestrator | Tuesday 02 September 2025 00:43:57 +0000 (0:00:00.832) 0:00:32.760 ***** 
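The "Resolve device link(s)", "Resolve dedicated_device link(s)" and "Resolve bluestore_wal_device link(s)" tasks above normalize any /dev/disk/by-id or /dev/disk/by-path symlinks in the configured device lists to their canonical block-device names before the OSD device lists are built; they are skipped on this run. A minimal Python sketch of that normalization under the assumption of a plain list of device paths (resolve_devices is an illustrative name, not a ceph-ansible function):

    import os

    def resolve_devices(devices):
        # Resolve each configured path to its canonical target, so that a
        # by-id symlink and its /dev/sdX target never both end up in the
        # list as two entries for the same disk.
        resolved = []
        for dev in devices:
            real = os.path.realpath(dev) if os.path.islink(dev) else dev
            if real not in resolved:
                resolved.append(real)
        return resolved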
2025-09-02 00:55:05.884242 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.884253 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.884264 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.884275 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.884286 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.884297 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.884308 | orchestrator | 2025-09-02 00:55:05.884319 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-02 00:55:05.884336 | orchestrator | Tuesday 02 September 2025 00:43:58 +0000 (0:00:01.008) 0:00:33.769 ***** 2025-09-02 00:55:05.884380 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.884393 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.884404 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.884415 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.884473 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.884485 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.884496 | orchestrator | 2025-09-02 00:55:05.884507 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-02 00:55:05.884518 | orchestrator | Tuesday 02 September 2025 00:43:59 +0000 (0:00:00.993) 0:00:34.762 ***** 2025-09-02 00:55:05.884530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--13b5fa21--9dd3--5f23--9982--99f7e2a8b07c-osd--block--13b5fa21--9dd3--5f23--9982--99f7e2a8b07c', 'dm-uuid-LVM-T6Z3P3nBZVBO8YdzD4wDcT6X0PQUZyHMzCQsTBTPAt7wdpbwDpTgKjlwaJnHX89S'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--688b3bb6--a638--5f84--8470--ce7969c766cd-osd--block--688b3bb6--a638--5f84--8470--ce7969c766cd', 'dm-uuid-LVM-5DFDHHLaMcqlr42LtK9y1ks0goXeiOLsVdQ3XQwJkrJPN1jGtt7yT9M7NmtcNE4W'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part1', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part14', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part15', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part16', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.884705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--13b5fa21--9dd3--5f23--9982--99f7e2a8b07c-osd--block--13b5fa21--9dd3--5f23--9982--99f7e2a8b07c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-52ec4o-w5kU-dUIA-7pTt-Ivor-269A-qymOia', 'scsi-0QEMU_QEMU_HARDDISK_73f7aaa7-092a-4c1a-a663-fd98a6f92d43', 'scsi-SQEMU_QEMU_HARDDISK_73f7aaa7-092a-4c1a-a663-fd98a6f92d43'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.884722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--688b3bb6--a638--5f84--8470--ce7969c766cd-osd--block--688b3bb6--a638--5f84--8470--ce7969c766cd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pHjcBF-aLQ2-arb5-pD4s-mfWi-GfZC-LbvRyv', 'scsi-0QEMU_QEMU_HARDDISK_533befbb-84ad-4d2f-a6fe-9bcc757d70d3', 'scsi-SQEMU_QEMU_HARDDISK_533befbb-84ad-4d2f-a6fe-9bcc757d70d3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.884734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_422500f3-63b7-48d3-a02b-7c8a68fd4498', 'scsi-SQEMU_QEMU_HARDDISK_422500f3-63b7-48d3-a02b-7c8a68fd4498'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.884747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-02-00-01-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.884765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de858a7c--8c7c--5154--a7df--793b28d7d942-osd--block--de858a7c--8c7c--5154--a7df--793b28d7d942', 'dm-uuid-LVM-ma6ZNkFTI2pW677Dtsi99WvqlH4kOSHkgt2lGF6fu40s8l6PK4gUAx6Lp102tY7q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4843a7b7--fb51--5101--86f0--3e9039878e37-osd--block--4843a7b7--fb51--5101--86f0--3e9039878e37', 'dm-uuid-LVM-kwOU19wIHVYOI5Hf2Y4Yz3ryuAguNEcFccQ4JqaRPimMD4XfTDS8Iz6qATnMeiTA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
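The skipped items above and below are the per-host ansible_facts device entries that the "Collect existed devices" task iterates over. A simplified Python sketch of that kind of filtering (the function name and the exact rules are illustrative, not ceph-ansible's logic; the field names mirror the facts shown in the log):

    def empty_data_disks(devices):
        # Return devices that look like unused data disks: skip loop,
        # device-mapper and optical devices, and anything that already has
        # partitions or holders (such as the ceph-* LVM volumes on sdb/sdc).
        selected = []
        for name, facts in devices.items():
            if name.startswith(("loop", "dm-", "sr")):
                continue
            if facts.get("partitions") or facts.get("holders"):
                continue
            selected.append("/dev/" + name)
        return selected

    # Against the facts shown here: sda (partitioned root disk), sdb and sdc
    # (held by ceph LVM volumes) and sr0 (config drive) are skipped, leaving
    # only the unused /dev/sdd on each node.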
2025-09-02 00:55:05.884806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884839 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.884850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ad19e49--f824--57b0--a164--7b3912efd317-osd--block--7ad19e49--f824--57b0--a164--7b3912efd317', 'dm-uuid-LVM-bvxsWt8LXX4MIwOUIceR1g502rbBdH0idmo7Hbn6tK08s02n2USNM6FAhFO2GmKO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--14a05dcf--7776--5f2b--8543--65494bada47a-osd--block--14a05dcf--7776--5f2b--8543--65494bada47a', 'dm-uuid-LVM-7MaOfZrc4vC7t91s5rBv8cpEUSYWM9fFMtliVA2Gi7uzqZNfPQaDewuDOFuHo2GF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.884994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part1', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part14', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part15', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part16', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.885063 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--de858a7c--8c7c--5154--a7df--793b28d7d942-osd--block--de858a7c--8c7c--5154--a7df--793b28d7d942'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ETcDf6-YET1-mUgR-WJcn-lq56-yxxu-9IOrbI', 'scsi-0QEMU_QEMU_HARDDISK_ab73bc68-b021-49f6-bbbb-bb60dd18c0cd', 'scsi-SQEMU_QEMU_HARDDISK_ab73bc68-b021-49f6-bbbb-bb60dd18c0cd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.885088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4843a7b7--fb51--5101--86f0--3e9039878e37-osd--block--4843a7b7--fb51--5101--86f0--3e9039878e37'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-f5ScAO-8MlN-r0n9-EgSW-3S8i-n1aV-MdwxRw', 'scsi-0QEMU_QEMU_HARDDISK_0c16e54e-6892-4c41-822b-0a71b602051a', 'scsi-SQEMU_QEMU_HARDDISK_0c16e54e-6892-4c41-822b-0a71b602051a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.885132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a98751c-9a0d-464e-b805-2bbf5e836a0e', 'scsi-SQEMU_QEMU_HARDDISK_5a98751c-9a0d-464e-b805-2bbf5e836a0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.885147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part1', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part14', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part15', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part16', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.885159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-02-00-01-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.885181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7ad19e49--f824--57b0--a164--7b3912efd317-osd--block--7ad19e49--f824--57b0--a164--7b3912efd317'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uMgDuu-QQi3-CkEu-JTS0-eViq-CoBT-fXK4Qm', 'scsi-0QEMU_QEMU_HARDDISK_8c8a1bad-d3c8-4ce8-afe3-e6b6dedd17eb', 'scsi-SQEMU_QEMU_HARDDISK_8c8a1bad-d3c8-4ce8-afe3-e6b6dedd17eb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.885192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--14a05dcf--7776--5f2b--8543--65494bada47a-osd--block--14a05dcf--7776--5f2b--8543--65494bada47a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZPNxph-hsQl-foE8-Dl2I-UKJd-HrnJ-QvnxGG', 'scsi-0QEMU_QEMU_HARDDISK_4851d1ac-b90a-4b34-9adb-d79585c21de6', 'scsi-SQEMU_QEMU_HARDDISK_4851d1ac-b90a-4b34-9adb-d79585c21de6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.885202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6860efa8-e6c6-43d7-8842-eeafa8a27f70', 'scsi-SQEMU_QEMU_HARDDISK_6860efa8-e6c6-43d7-8842-eeafa8a27f70'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.885217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-02-00-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.885227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885253 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885273 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.885289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222', 'scsi-SQEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222-part1', 'scsi-SQEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222-part14', 'scsi-SQEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222-part15', 'scsi-SQEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222-part16', 'scsi-SQEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.885357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-02-00-02-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.885368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885388 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716', 'scsi-SQEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716-part1', 'scsi-SQEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716-part14', 'scsi-SQEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716-part15', 'scsi-SQEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716-part16', 'scsi-SQEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.885497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-02-00-01-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.885507 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.885517 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.885527 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.885537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:55:05.885662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee', 'scsi-SQEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee-part1', 'scsi-SQEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee-part14', 'scsi-SQEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee-part15', 'scsi-SQEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee-part16', 'scsi-SQEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.885686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-02-00-01-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:55:05.885696 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.885706 | orchestrator | 2025-09-02 00:55:05.885716 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-02 00:55:05.885726 | orchestrator | Tuesday 02 September 2025 00:44:02 +0000 (0:00:02.844) 0:00:37.607 ***** 2025-09-02 00:55:05.885736 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--13b5fa21--9dd3--5f23--9982--99f7e2a8b07c-osd--block--13b5fa21--9dd3--5f23--9982--99f7e2a8b07c', 'dm-uuid-LVM-T6Z3P3nBZVBO8YdzD4wDcT6X0PQUZyHMzCQsTBTPAt7wdpbwDpTgKjlwaJnHX89S'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.885748 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--688b3bb6--a638--5f84--8470--ce7969c766cd-osd--block--688b3bb6--a638--5f84--8470--ce7969c766cd', 'dm-uuid-LVM-5DFDHHLaMcqlr42LtK9y1ks0goXeiOLsVdQ3XQwJkrJPN1jGtt7yT9M7NmtcNE4W'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.885768 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.885779 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.885789 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.885804 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.885815 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de858a7c--8c7c--5154--a7df--793b28d7d942-osd--block--de858a7c--8c7c--5154--a7df--793b28d7d942', 'dm-uuid-LVM-ma6ZNkFTI2pW677Dtsi99WvqlH4kOSHkgt2lGF6fu40s8l6PK4gUAx6Lp102tY7q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.885825 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.885839 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4843a7b7--fb51--5101--86f0--3e9039878e37-osd--block--4843a7b7--fb51--5101--86f0--3e9039878e37', 'dm-uuid-LVM-kwOU19wIHVYOI5Hf2Y4Yz3ryuAguNEcFccQ4JqaRPimMD4XfTDS8Iz6qATnMeiTA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.885907 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.885918 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.885934 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.885945 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.885955 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.885965 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.885993 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part1', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part14', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part15', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part16', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886005 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886063 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--13b5fa21--9dd3--5f23--9982--99f7e2a8b07c-osd--block--13b5fa21--9dd3--5f23--9982--99f7e2a8b07c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-52ec4o-w5kU-dUIA-7pTt-Ivor-269A-qymOia', 'scsi-0QEMU_QEMU_HARDDISK_73f7aaa7-092a-4c1a-a663-fd98a6f92d43', 'scsi-SQEMU_QEMU_HARDDISK_73f7aaa7-092a-4c1a-a663-fd98a6f92d43'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886087 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886098 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--688b3bb6--a638--5f84--8470--ce7969c766cd-osd--block--688b3bb6--a638--5f84--8470--ce7969c766cd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pHjcBF-aLQ2-arb5-pD4s-mfWi-GfZC-LbvRyv', 'scsi-0QEMU_QEMU_HARDDISK_533befbb-84ad-4d2f-a6fe-9bcc757d70d3', 'scsi-SQEMU_QEMU_HARDDISK_533befbb-84ad-4d2f-a6fe-9bcc757d70d3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886109 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886592 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_422500f3-63b7-48d3-a02b-7c8a68fd4498', 'scsi-SQEMU_QEMU_HARDDISK_422500f3-63b7-48d3-a02b-7c8a68fd4498'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886613 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886624 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-02-00-01-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886648 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886659 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ad19e49--f824--57b0--a164--7b3912efd317-osd--block--7ad19e49--f824--57b0--a164--7b3912efd317', 'dm-uuid-LVM-bvxsWt8LXX4MIwOUIceR1g502rbBdH0idmo7Hbn6tK08s02n2USNM6FAhFO2GmKO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886676 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--14a05dcf--7776--5f2b--8543--65494bada47a-osd--block--14a05dcf--7776--5f2b--8543--65494bada47a', 'dm-uuid-LVM-7MaOfZrc4vC7t91s5rBv8cpEUSYWM9fFMtliVA2Gi7uzqZNfPQaDewuDOFuHo2GF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886692 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part1', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part14', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part15', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part16', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886709 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.886720 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886730 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--de858a7c--8c7c--5154--a7df--793b28d7d942-osd--block--de858a7c--8c7c--5154--a7df--793b28d7d942'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ETcDf6-YET1-mUgR-WJcn-lq56-yxxu-9IOrbI', 'scsi-0QEMU_QEMU_HARDDISK_ab73bc68-b021-49f6-bbbb-bb60dd18c0cd', 'scsi-SQEMU_QEMU_HARDDISK_ab73bc68-b021-49f6-bbbb-bb60dd18c0cd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886745 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886756 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--4843a7b7--fb51--5101--86f0--3e9039878e37-osd--block--4843a7b7--fb51--5101--86f0--3e9039878e37'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-f5ScAO-8MlN-r0n9-EgSW-3S8i-n1aV-MdwxRw', 'scsi-0QEMU_QEMU_HARDDISK_0c16e54e-6892-4c41-822b-0a71b602051a', 'scsi-SQEMU_QEMU_HARDDISK_0c16e54e-6892-4c41-822b-0a71b602051a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886780 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886791 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a98751c-9a0d-464e-b805-2bbf5e836a0e', 'scsi-SQEMU_QEMU_HARDDISK_5a98751c-9a0d-464e-b805-2bbf5e836a0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886802 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886817 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-02-00-01-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886827 | orchestrator | skipping: [testbed-node-5] 
=> (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886837 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886853 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886867 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886878 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886895 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part1', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part14', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part15', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part16', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886916 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7ad19e49--f824--57b0--a164--7b3912efd317-osd--block--7ad19e49--f824--57b0--a164--7b3912efd317'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uMgDuu-QQi3-CkEu-JTS0-eViq-CoBT-fXK4Qm', 'scsi-0QEMU_QEMU_HARDDISK_8c8a1bad-d3c8-4ce8-afe3-e6b6dedd17eb', 'scsi-SQEMU_QEMU_HARDDISK_8c8a1bad-d3c8-4ce8-afe3-e6b6dedd17eb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886949 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886961 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--14a05dcf--7776--5f2b--8543--65494bada47a-osd--block--14a05dcf--7776--5f2b--8543--65494bada47a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZPNxph-hsQl-foE8-Dl2I-UKJd-HrnJ-QvnxGG', 'scsi-0QEMU_QEMU_HARDDISK_4851d1ac-b90a-4b34-9adb-d79585c21de6', 'scsi-SQEMU_QEMU_HARDDISK_4851d1ac-b90a-4b34-9adb-d79585c21de6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886978 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.886989 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6860efa8-e6c6-43d7-8842-eeafa8a27f70', 'scsi-SQEMU_QEMU_HARDDISK_6860efa8-e6c6-43d7-8842-eeafa8a27f70'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887005 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887020 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-02-00-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887031 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.887041 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887051 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887067 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887083 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887099 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222', 'scsi-SQEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222-part1', 'scsi-SQEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222-part14', 'scsi-SQEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222-part15', 'scsi-SQEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222-part16', 'scsi-SQEMU_QEMU_HARDDISK_59f489c4-9d81-4778-bdf1-baefbcbe9222-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887110 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887126 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887142 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887153 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-02-00-02-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887167 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887178 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887190 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887207 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887225 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887243 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716', 'scsi-SQEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716-part1', 'scsi-SQEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716-part14', 'scsi-SQEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716-part15', 'scsi-SQEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716-part16', 'scsi-SQEMU_QEMU_HARDDISK_bf440116-e340-4d35-9c90-505955753716-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887257 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-02-00-01-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887268 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.887280 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.887291 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.887314 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887326 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887338 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887356 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887369 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887382 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887399 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887417 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887453 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee', 'scsi-SQEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee-part1', 'scsi-SQEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee-part14', 'scsi-SQEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee-part15', 'scsi-SQEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee-part16', 'scsi-SQEMU_QEMU_HARDDISK_44dae450-8362-4b96-8159-84e27a3f13ee-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887468 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-02-00-01-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:55:05.887486 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.887496 | orchestrator | 2025-09-02 00:55:05.887506 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-02 00:55:05.887517 | orchestrator | Tuesday 02 September 2025 00:44:03 +0000 (0:00:01.458) 0:00:39.065 ***** 2025-09-02 00:55:05.887532 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.887543 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.887553 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.887562 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.887572 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.887582 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.887591 | orchestrator | 2025-09-02 00:55:05.887601 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-02 00:55:05.887611 | orchestrator | Tuesday 02 September 2025 00:44:05 +0000 (0:00:01.439) 0:00:40.504 ***** 2025-09-02 00:55:05.887621 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.887630 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.887640 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.887650 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.887660 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.887669 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.887679 | orchestrator | 2025-09-02 00:55:05.887689 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-02 00:55:05.887699 | orchestrator | Tuesday 02 September 2025 00:44:06 +0000 (0:00:01.196) 0:00:41.701 ***** 2025-09-02 00:55:05.887709 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.887718 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.887728 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.887739 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.887749 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.887759 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.887769 | orchestrator | 2025-09-02 00:55:05.887778 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-02 00:55:05.887788 | orchestrator | Tuesday 02 September 2025 00:44:07 +0000 (0:00:01.309) 0:00:43.011 ***** 2025-09-02 00:55:05.887798 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.887808 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.887818 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.887828 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.887838 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.887848 | orchestrator | skipping: [testbed-node-2] 
2025-09-02 00:55:05.887858 | orchestrator | 2025-09-02 00:55:05.887868 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-02 00:55:05.887879 | orchestrator | Tuesday 02 September 2025 00:44:08 +0000 (0:00:01.135) 0:00:44.146 ***** 2025-09-02 00:55:05.887889 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.887898 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.887909 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.887919 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.887928 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.887938 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.887948 | orchestrator | 2025-09-02 00:55:05.887959 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-02 00:55:05.887969 | orchestrator | Tuesday 02 September 2025 00:44:09 +0000 (0:00:00.797) 0:00:44.944 ***** 2025-09-02 00:55:05.887979 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.887989 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.888000 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.888010 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.888020 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.888033 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.888044 | orchestrator | 2025-09-02 00:55:05.888054 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-02 00:55:05.888070 | orchestrator | Tuesday 02 September 2025 00:44:11 +0000 (0:00:01.586) 0:00:46.530 ***** 2025-09-02 00:55:05.888081 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-02 00:55:05.888092 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-02 00:55:05.888103 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-02 00:55:05.888113 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-02 00:55:05.888123 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-02 00:55:05.888134 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-09-02 00:55:05.888144 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-02 00:55:05.888153 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-02 00:55:05.888163 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-09-02 00:55:05.888173 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-09-02 00:55:05.888182 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-02 00:55:05.888192 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-02 00:55:05.888202 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-09-02 00:55:05.888211 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-02 00:55:05.888221 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-09-02 00:55:05.888230 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-02 00:55:05.888240 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-09-02 00:55:05.888250 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-02 00:55:05.888260 | orchestrator | 2025-09-02 00:55:05.888269 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-02 00:55:05.888279 | orchestrator | Tuesday 02 
September 2025 00:44:14 +0000 (0:00:02.861) 0:00:49.392 ***** 2025-09-02 00:55:05.888289 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-02 00:55:05.888299 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-02 00:55:05.888308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-02 00:55:05.888318 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.888328 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-02 00:55:05.888337 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-02 00:55:05.888347 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-02 00:55:05.888357 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.888366 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-02 00:55:05.888376 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-02 00:55:05.888399 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-02 00:55:05.888409 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.888419 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-02 00:55:05.888477 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-02 00:55:05.888487 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-02 00:55:05.888496 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.888587 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-02 00:55:05.888599 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-02 00:55:05.888609 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-02 00:55:05.888619 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.888628 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-02 00:55:05.888638 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-02 00:55:05.888648 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-02 00:55:05.888657 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.888667 | orchestrator | 2025-09-02 00:55:05.888685 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-02 00:55:05.888695 | orchestrator | Tuesday 02 September 2025 00:44:15 +0000 (0:00:01.250) 0:00:50.642 ***** 2025-09-02 00:55:05.888705 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.888715 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.888724 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.888734 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.888744 | orchestrator | 2025-09-02 00:55:05.888754 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-02 00:55:05.888764 | orchestrator | Tuesday 02 September 2025 00:44:16 +0000 (0:00:01.463) 0:00:52.106 ***** 2025-09-02 00:55:05.888774 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.888782 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.888790 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.888798 | orchestrator | 2025-09-02 00:55:05.888806 | orchestrator | TASK [ceph-facts 
: Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-02 00:55:05.888814 | orchestrator | Tuesday 02 September 2025 00:44:17 +0000 (0:00:00.980) 0:00:53.086 ***** 2025-09-02 00:55:05.888822 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.888829 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.888837 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.888845 | orchestrator | 2025-09-02 00:55:05.888853 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-02 00:55:05.888881 | orchestrator | Tuesday 02 September 2025 00:44:18 +0000 (0:00:00.370) 0:00:53.456 ***** 2025-09-02 00:55:05.888892 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.888901 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.888909 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.888918 | orchestrator | 2025-09-02 00:55:05.888961 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-02 00:55:05.888972 | orchestrator | Tuesday 02 September 2025 00:44:18 +0000 (0:00:00.528) 0:00:53.985 ***** 2025-09-02 00:55:05.888981 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.888990 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.888998 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.889007 | orchestrator | 2025-09-02 00:55:05.889016 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-02 00:55:05.889025 | orchestrator | Tuesday 02 September 2025 00:44:19 +0000 (0:00:01.084) 0:00:55.070 ***** 2025-09-02 00:55:05.889033 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-02 00:55:05.889042 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-02 00:55:05.889050 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-02 00:55:05.889059 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.889068 | orchestrator | 2025-09-02 00:55:05.889076 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-02 00:55:05.889085 | orchestrator | Tuesday 02 September 2025 00:44:20 +0000 (0:00:00.557) 0:00:55.627 ***** 2025-09-02 00:55:05.889093 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-02 00:55:05.889102 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-02 00:55:05.889110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-02 00:55:05.889119 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.889127 | orchestrator | 2025-09-02 00:55:05.889136 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-02 00:55:05.889144 | orchestrator | Tuesday 02 September 2025 00:44:20 +0000 (0:00:00.521) 0:00:56.148 ***** 2025-09-02 00:55:05.889153 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-02 00:55:05.889161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-02 00:55:05.889170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-02 00:55:05.889192 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.889202 | orchestrator | 2025-09-02 00:55:05.889210 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-02 00:55:05.889219 | orchestrator | Tuesday 02 September 
2025 00:44:21 +0000 (0:00:00.790) 0:00:56.939 ***** 2025-09-02 00:55:05.889228 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.889237 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.889250 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.889300 | orchestrator | 2025-09-02 00:55:05.889319 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-02 00:55:05.889334 | orchestrator | Tuesday 02 September 2025 00:44:22 +0000 (0:00:00.663) 0:00:57.603 ***** 2025-09-02 00:55:05.889345 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-02 00:55:05.889357 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-02 00:55:05.889369 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-02 00:55:05.889382 | orchestrator | 2025-09-02 00:55:05.889403 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-02 00:55:05.889416 | orchestrator | Tuesday 02 September 2025 00:44:24 +0000 (0:00:01.880) 0:00:59.484 ***** 2025-09-02 00:55:05.889449 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-02 00:55:05.889462 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-02 00:55:05.889474 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-02 00:55:05.889545 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-02 00:55:05.889566 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-02 00:55:05.889581 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-02 00:55:05.889595 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-02 00:55:05.889610 | orchestrator | 2025-09-02 00:55:05.889625 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-02 00:55:05.889639 | orchestrator | Tuesday 02 September 2025 00:44:25 +0000 (0:00:01.007) 0:01:00.492 ***** 2025-09-02 00:55:05.889654 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-02 00:55:05.889668 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-02 00:55:05.889681 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-02 00:55:05.889696 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-02 00:55:05.889710 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-02 00:55:05.889722 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-02 00:55:05.889736 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-02 00:55:05.889751 | orchestrator | 2025-09-02 00:55:05.889764 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-02 00:55:05.889777 | orchestrator | Tuesday 02 September 2025 00:44:27 +0000 (0:00:02.634) 0:01:03.126 ***** 2025-09-02 00:55:05.889790 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.889805 | orchestrator | 2025-09-02 
00:55:05.889818 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-02 00:55:05.889832 | orchestrator | Tuesday 02 September 2025 00:44:29 +0000 (0:00:01.181) 0:01:04.308 ***** 2025-09-02 00:55:05.889854 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.889868 | orchestrator | 2025-09-02 00:55:05.889882 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-02 00:55:05.889908 | orchestrator | Tuesday 02 September 2025 00:44:30 +0000 (0:00:01.284) 0:01:05.592 ***** 2025-09-02 00:55:05.889923 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.889937 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.889950 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.889964 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.889979 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.889993 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.890007 | orchestrator | 2025-09-02 00:55:05.890090 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-02 00:55:05.890117 | orchestrator | Tuesday 02 September 2025 00:44:31 +0000 (0:00:01.322) 0:01:06.915 ***** 2025-09-02 00:55:05.890145 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.890158 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.890171 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.890186 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.890200 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.890213 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.890226 | orchestrator | 2025-09-02 00:55:05.890240 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-02 00:55:05.890254 | orchestrator | Tuesday 02 September 2025 00:44:32 +0000 (0:00:01.159) 0:01:08.075 ***** 2025-09-02 00:55:05.890268 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.890282 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.890295 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.890308 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.890321 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.890334 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.890346 | orchestrator | 2025-09-02 00:55:05.890359 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-02 00:55:05.890371 | orchestrator | Tuesday 02 September 2025 00:44:35 +0000 (0:00:02.376) 0:01:10.452 ***** 2025-09-02 00:55:05.890384 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.890397 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.890410 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.890443 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.890459 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.890472 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.890486 | orchestrator | 2025-09-02 00:55:05.890499 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-02 00:55:05.890513 | orchestrator | Tuesday 02 September 2025 00:44:36 +0000 (0:00:01.541) 0:01:11.993 ***** 2025-09-02 00:55:05.890526 | orchestrator | 
skipping: [testbed-node-3] 2025-09-02 00:55:05.890539 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.890550 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.890561 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.890573 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.890585 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.890597 | orchestrator | 2025-09-02 00:55:05.890610 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-02 00:55:05.890648 | orchestrator | Tuesday 02 September 2025 00:44:38 +0000 (0:00:01.677) 0:01:13.671 ***** 2025-09-02 00:55:05.890664 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.890679 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.890692 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.890706 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.890720 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.890734 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.890746 | orchestrator | 2025-09-02 00:55:05.890760 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-02 00:55:05.890772 | orchestrator | Tuesday 02 September 2025 00:44:39 +0000 (0:00:01.223) 0:01:14.895 ***** 2025-09-02 00:55:05.890786 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.890810 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.890823 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.890837 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.890849 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.890863 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.890876 | orchestrator | 2025-09-02 00:55:05.890889 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-02 00:55:05.890902 | orchestrator | Tuesday 02 September 2025 00:44:40 +0000 (0:00:00.801) 0:01:15.696 ***** 2025-09-02 00:55:05.890916 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.890930 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.890944 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.890958 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.890971 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.890985 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.890999 | orchestrator | 2025-09-02 00:55:05.891013 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-02 00:55:05.891026 | orchestrator | Tuesday 02 September 2025 00:44:42 +0000 (0:00:01.769) 0:01:17.466 ***** 2025-09-02 00:55:05.891040 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.891053 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.891067 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.891080 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.891094 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.891105 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.891113 | orchestrator | 2025-09-02 00:55:05.891121 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-02 00:55:05.891129 | orchestrator | Tuesday 02 September 2025 00:44:43 +0000 (0:00:01.229) 0:01:18.695 ***** 2025-09-02 00:55:05.891137 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.891145 | orchestrator | skipping: 
[testbed-node-4] 2025-09-02 00:55:05.891153 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.891161 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.891169 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.891176 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.891184 | orchestrator | 2025-09-02 00:55:05.891192 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-02 00:55:05.891200 | orchestrator | Tuesday 02 September 2025 00:44:44 +0000 (0:00:00.834) 0:01:19.529 ***** 2025-09-02 00:55:05.891208 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.891216 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.891224 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.891238 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.891246 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.891254 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.891262 | orchestrator | 2025-09-02 00:55:05.891270 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-02 00:55:05.891278 | orchestrator | Tuesday 02 September 2025 00:44:45 +0000 (0:00:00.890) 0:01:20.420 ***** 2025-09-02 00:55:05.891286 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.891294 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.891301 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.891309 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.891317 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.891325 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.891333 | orchestrator | 2025-09-02 00:55:05.891341 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-02 00:55:05.891349 | orchestrator | Tuesday 02 September 2025 00:44:46 +0000 (0:00:00.897) 0:01:21.317 ***** 2025-09-02 00:55:05.891357 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.891364 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.891372 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.891380 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.891388 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.891396 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.891411 | orchestrator | 2025-09-02 00:55:05.891419 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-02 00:55:05.891473 | orchestrator | Tuesday 02 September 2025 00:44:46 +0000 (0:00:00.688) 0:01:22.006 ***** 2025-09-02 00:55:05.891482 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.891490 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.891498 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.891505 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.891513 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.891521 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.891529 | orchestrator | 2025-09-02 00:55:05.891537 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-02 00:55:05.891545 | orchestrator | Tuesday 02 September 2025 00:44:47 +0000 (0:00:00.923) 0:01:22.930 ***** 2025-09-02 00:55:05.891553 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.891561 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.891569 | orchestrator | 
skipping: [testbed-node-5] 2025-09-02 00:55:05.891576 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.891584 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.891592 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.891600 | orchestrator | 2025-09-02 00:55:05.891608 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-02 00:55:05.891616 | orchestrator | Tuesday 02 September 2025 00:44:48 +0000 (0:00:00.817) 0:01:23.747 ***** 2025-09-02 00:55:05.891624 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.891631 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.891639 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.891647 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.891655 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.891663 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.891671 | orchestrator | 2025-09-02 00:55:05.891688 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-02 00:55:05.891697 | orchestrator | Tuesday 02 September 2025 00:44:49 +0000 (0:00:01.290) 0:01:25.038 ***** 2025-09-02 00:55:05.891705 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.891713 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.891721 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.891729 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.891737 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.891744 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.891752 | orchestrator | 2025-09-02 00:55:05.891760 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-02 00:55:05.891768 | orchestrator | Tuesday 02 September 2025 00:44:50 +0000 (0:00:00.663) 0:01:25.701 ***** 2025-09-02 00:55:05.891776 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.891784 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.891792 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.891800 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.891808 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.891816 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.891823 | orchestrator | 2025-09-02 00:55:05.891831 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-02 00:55:05.891839 | orchestrator | Tuesday 02 September 2025 00:44:51 +0000 (0:00:01.028) 0:01:26.730 ***** 2025-09-02 00:55:05.891847 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.891855 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.891863 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.891871 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.891879 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.891886 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.891894 | orchestrator | 2025-09-02 00:55:05.891902 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-02 00:55:05.891910 | orchestrator | Tuesday 02 September 2025 00:44:52 +0000 (0:00:01.190) 0:01:27.921 ***** 2025-09-02 00:55:05.891918 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.891931 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.891939 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.891947 | orchestrator | changed: 
[testbed-node-0] 2025-09-02 00:55:05.891955 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.891963 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.891970 | orchestrator | 2025-09-02 00:55:05.891977 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-02 00:55:05.891983 | orchestrator | Tuesday 02 September 2025 00:44:54 +0000 (0:00:01.456) 0:01:29.377 ***** 2025-09-02 00:55:05.891990 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.891997 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.892004 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.892010 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.892017 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.892023 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.892030 | orchestrator | 2025-09-02 00:55:05.892037 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-02 00:55:05.892043 | orchestrator | Tuesday 02 September 2025 00:44:56 +0000 (0:00:02.199) 0:01:31.577 ***** 2025-09-02 00:55:05.892057 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.892065 | orchestrator | 2025-09-02 00:55:05.892072 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-09-02 00:55:05.892078 | orchestrator | Tuesday 02 September 2025 00:44:57 +0000 (0:00:01.132) 0:01:32.709 ***** 2025-09-02 00:55:05.892085 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.892092 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.892098 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.892105 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.892112 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.892118 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.892125 | orchestrator | 2025-09-02 00:55:05.892132 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-02 00:55:05.892138 | orchestrator | Tuesday 02 September 2025 00:44:58 +0000 (0:00:00.653) 0:01:33.363 ***** 2025-09-02 00:55:05.892145 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.892152 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.892158 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.892165 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.892172 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.892178 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.892185 | orchestrator | 2025-09-02 00:55:05.892192 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-09-02 00:55:05.892198 | orchestrator | Tuesday 02 September 2025 00:44:59 +0000 (0:00:00.945) 0:01:34.309 ***** 2025-09-02 00:55:05.892205 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-02 00:55:05.892212 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-02 00:55:05.892218 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-02 00:55:05.892225 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-02 
00:55:05.892232 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-02 00:55:05.892238 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-02 00:55:05.892245 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-02 00:55:05.892252 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-02 00:55:05.892259 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-02 00:55:05.892265 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-02 00:55:05.892278 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-02 00:55:05.892289 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-02 00:55:05.892296 | orchestrator | 2025-09-02 00:55:05.892302 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-09-02 00:55:05.892309 | orchestrator | Tuesday 02 September 2025 00:45:00 +0000 (0:00:01.445) 0:01:35.754 ***** 2025-09-02 00:55:05.892316 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.892322 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.892329 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.892336 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.892343 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.892349 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.892356 | orchestrator | 2025-09-02 00:55:05.892363 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-09-02 00:55:05.892369 | orchestrator | Tuesday 02 September 2025 00:45:01 +0000 (0:00:01.266) 0:01:37.021 ***** 2025-09-02 00:55:05.892376 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.892383 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.892390 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.892397 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.892403 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.892410 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.892417 | orchestrator | 2025-09-02 00:55:05.892433 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-09-02 00:55:05.892440 | orchestrator | Tuesday 02 September 2025 00:45:02 +0000 (0:00:00.611) 0:01:37.632 ***** 2025-09-02 00:55:05.892447 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.892453 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.892460 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.892467 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.892473 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.892480 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.892487 | orchestrator | 2025-09-02 00:55:05.892493 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-09-02 00:55:05.892500 | orchestrator | Tuesday 02 September 2025 00:45:03 +0000 (0:00:00.806) 0:01:38.439 ***** 2025-09-02 00:55:05.892507 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.892513 | orchestrator | skipping: [testbed-node-4] 2025-09-02 
00:55:05.892520 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.892527 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.892533 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.892540 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.892546 | orchestrator | 2025-09-02 00:55:05.892553 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-09-02 00:55:05.892560 | orchestrator | Tuesday 02 September 2025 00:45:03 +0000 (0:00:00.612) 0:01:39.051 ***** 2025-09-02 00:55:05.892567 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.892574 | orchestrator | 2025-09-02 00:55:05.892581 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-09-02 00:55:05.892591 | orchestrator | Tuesday 02 September 2025 00:45:05 +0000 (0:00:01.282) 0:01:40.333 ***** 2025-09-02 00:55:05.892597 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.892604 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.892611 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.892618 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.892624 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.892631 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.892638 | orchestrator | 2025-09-02 00:55:05.892645 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-09-02 00:55:05.892656 | orchestrator | Tuesday 02 September 2025 00:46:10 +0000 (0:01:05.718) 0:02:46.052 ***** 2025-09-02 00:55:05.892662 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-02 00:55:05.892669 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-02 00:55:05.892676 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-02 00:55:05.892682 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.892689 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-02 00:55:05.892696 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-02 00:55:05.892703 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-02 00:55:05.892709 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.892716 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-02 00:55:05.892723 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-02 00:55:05.892729 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-02 00:55:05.892736 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.892743 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-02 00:55:05.892749 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-02 00:55:05.892756 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-02 00:55:05.892763 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.892769 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-02 
00:55:05.892776 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-02 00:55:05.892783 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-02 00:55:05.892789 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.892796 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-02 00:55:05.892807 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-02 00:55:05.892814 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-02 00:55:05.892821 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.892827 | orchestrator | 2025-09-02 00:55:05.892834 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-09-02 00:55:05.892841 | orchestrator | Tuesday 02 September 2025 00:46:11 +0000 (0:00:00.774) 0:02:46.826 ***** 2025-09-02 00:55:05.892848 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.892854 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.892861 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.892868 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.892874 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.892881 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.892888 | orchestrator | 2025-09-02 00:55:05.892895 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-09-02 00:55:05.892901 | orchestrator | Tuesday 02 September 2025 00:46:12 +0000 (0:00:00.958) 0:02:47.784 ***** 2025-09-02 00:55:05.892908 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.892915 | orchestrator | 2025-09-02 00:55:05.892921 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-09-02 00:55:05.892928 | orchestrator | Tuesday 02 September 2025 00:46:12 +0000 (0:00:00.156) 0:02:47.941 ***** 2025-09-02 00:55:05.892935 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.892942 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.892948 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.892955 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.892966 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.892972 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.892979 | orchestrator | 2025-09-02 00:55:05.892986 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-09-02 00:55:05.892993 | orchestrator | Tuesday 02 September 2025 00:46:13 +0000 (0:00:00.969) 0:02:48.910 ***** 2025-09-02 00:55:05.893000 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.893006 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.893013 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.893019 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.893026 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.893033 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.893039 | orchestrator | 2025-09-02 00:55:05.893046 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-09-02 00:55:05.893053 | orchestrator | Tuesday 02 September 2025 00:46:14 +0000 (0:00:01.104) 0:02:50.015 ***** 2025-09-02 00:55:05.893060 | orchestrator | skipping: 
[testbed-node-3] 2025-09-02 00:55:05.893066 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.893073 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.893079 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.893086 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.893092 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.893099 | orchestrator | 2025-09-02 00:55:05.893106 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-09-02 00:55:05.893113 | orchestrator | Tuesday 02 September 2025 00:46:15 +0000 (0:00:01.103) 0:02:51.118 ***** 2025-09-02 00:55:05.893120 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.893129 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.893136 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.893143 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.893150 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.893156 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.893163 | orchestrator | 2025-09-02 00:55:05.893170 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-09-02 00:55:05.893176 | orchestrator | Tuesday 02 September 2025 00:46:19 +0000 (0:00:03.266) 0:02:54.385 ***** 2025-09-02 00:55:05.893183 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.893190 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.893196 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.893203 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.893209 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.893216 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.893223 | orchestrator | 2025-09-02 00:55:05.893229 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-09-02 00:55:05.893236 | orchestrator | Tuesday 02 September 2025 00:46:19 +0000 (0:00:00.709) 0:02:55.095 ***** 2025-09-02 00:55:05.893243 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.893250 | orchestrator | 2025-09-02 00:55:05.893257 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-09-02 00:55:05.893264 | orchestrator | Tuesday 02 September 2025 00:46:21 +0000 (0:00:01.395) 0:02:56.491 ***** 2025-09-02 00:55:05.893271 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.893277 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.893284 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.893291 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.893297 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.893304 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.893311 | orchestrator | 2025-09-02 00:55:05.893317 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-09-02 00:55:05.893324 | orchestrator | Tuesday 02 September 2025 00:46:22 +0000 (0:00:01.046) 0:02:57.537 ***** 2025-09-02 00:55:05.893331 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.893342 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.893349 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.893356 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.893362 | orchestrator | 
skipping: [testbed-node-1] 2025-09-02 00:55:05.893369 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.893376 | orchestrator | 2025-09-02 00:55:05.893382 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-09-02 00:55:05.893389 | orchestrator | Tuesday 02 September 2025 00:46:22 +0000 (0:00:00.751) 0:02:58.288 ***** 2025-09-02 00:55:05.893396 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.893403 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.893409 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.893416 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.893436 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.893447 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.893454 | orchestrator | 2025-09-02 00:55:05.893461 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-09-02 00:55:05.893468 | orchestrator | Tuesday 02 September 2025 00:46:24 +0000 (0:00:01.170) 0:02:59.459 ***** 2025-09-02 00:55:05.893474 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.893481 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.893488 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.893494 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.893501 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.893508 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.893515 | orchestrator | 2025-09-02 00:55:05.893521 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-09-02 00:55:05.893528 | orchestrator | Tuesday 02 September 2025 00:46:25 +0000 (0:00:01.081) 0:03:00.541 ***** 2025-09-02 00:55:05.893535 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.893541 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.893548 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.893555 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.893561 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.893568 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.893575 | orchestrator | 2025-09-02 00:55:05.893581 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-09-02 00:55:05.893588 | orchestrator | Tuesday 02 September 2025 00:46:26 +0000 (0:00:00.877) 0:03:01.419 ***** 2025-09-02 00:55:05.893595 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.893601 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.893608 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.893615 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.893621 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.893628 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.893634 | orchestrator | 2025-09-02 00:55:05.893641 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-09-02 00:55:05.893648 | orchestrator | Tuesday 02 September 2025 00:46:27 +0000 (0:00:01.097) 0:03:02.516 ***** 2025-09-02 00:55:05.893655 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.893661 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.893668 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.893674 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.893681 | 
orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.893688 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.893694 | orchestrator | 2025-09-02 00:55:05.893701 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-09-02 00:55:05.893708 | orchestrator | Tuesday 02 September 2025 00:46:28 +0000 (0:00:00.850) 0:03:03.366 ***** 2025-09-02 00:55:05.893715 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.893721 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.893728 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.893734 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.893748 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.893754 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.893761 | orchestrator | 2025-09-02 00:55:05.893768 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-09-02 00:55:05.893777 | orchestrator | Tuesday 02 September 2025 00:46:29 +0000 (0:00:01.000) 0:03:04.366 ***** 2025-09-02 00:55:05.893785 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.893791 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.893798 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.893805 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.893812 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.893818 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.893825 | orchestrator | 2025-09-02 00:55:05.893832 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-09-02 00:55:05.893838 | orchestrator | Tuesday 02 September 2025 00:46:30 +0000 (0:00:01.452) 0:03:05.818 ***** 2025-09-02 00:55:05.893845 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.893852 | orchestrator | 2025-09-02 00:55:05.893859 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-09-02 00:55:05.893866 | orchestrator | Tuesday 02 September 2025 00:46:32 +0000 (0:00:01.488) 0:03:07.306 ***** 2025-09-02 00:55:05.893872 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-09-02 00:55:05.893879 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-09-02 00:55:05.893886 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-09-02 00:55:05.893893 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-09-02 00:55:05.893899 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-09-02 00:55:05.893906 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-09-02 00:55:05.893913 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-09-02 00:55:05.893919 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-09-02 00:55:05.893926 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-09-02 00:55:05.893933 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-09-02 00:55:05.893940 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-09-02 00:55:05.893946 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-09-02 00:55:05.893953 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-09-02 00:55:05.893960 | orchestrator | changed: 
[testbed-node-1] => (item=/var/lib/ceph/) 2025-09-02 00:55:05.893967 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-09-02 00:55:05.893973 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-09-02 00:55:05.893980 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-09-02 00:55:05.893987 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-09-02 00:55:05.893994 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-09-02 00:55:05.894000 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-09-02 00:55:05.894011 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-09-02 00:55:05.894041 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-09-02 00:55:05.894048 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-09-02 00:55:05.894055 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-09-02 00:55:05.894061 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-09-02 00:55:05.894068 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-09-02 00:55:05.894075 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-09-02 00:55:05.894081 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-09-02 00:55:05.894088 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-09-02 00:55:05.894100 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-09-02 00:55:05.894107 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-09-02 00:55:05.894114 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-09-02 00:55:05.894120 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-09-02 00:55:05.894127 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-09-02 00:55:05.894134 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-09-02 00:55:05.894140 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-09-02 00:55:05.894147 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-09-02 00:55:05.894154 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-09-02 00:55:05.894160 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-09-02 00:55:05.894167 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-09-02 00:55:05.894174 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-09-02 00:55:05.894180 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-09-02 00:55:05.894187 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-09-02 00:55:05.894194 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-09-02 00:55:05.894200 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-09-02 00:55:05.894207 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-09-02 00:55:05.894214 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-02 00:55:05.894220 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-02 00:55:05.894227 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-02 00:55:05.894234 
| orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-09-02 00:55:05.894241 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-09-02 00:55:05.894250 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-02 00:55:05.894257 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-02 00:55:05.894264 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-02 00:55:05.894271 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-02 00:55:05.894277 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-02 00:55:05.894284 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-02 00:55:05.894291 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-02 00:55:05.894297 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-02 00:55:05.894304 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-02 00:55:05.894310 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-02 00:55:05.894317 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-02 00:55:05.894324 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-02 00:55:05.894330 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-02 00:55:05.894337 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-02 00:55:05.894344 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-02 00:55:05.894350 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-02 00:55:05.894357 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-02 00:55:05.894363 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-02 00:55:05.894370 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-02 00:55:05.894381 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-02 00:55:05.894388 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-02 00:55:05.894394 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-02 00:55:05.894401 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-02 00:55:05.894408 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-02 00:55:05.894414 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-02 00:55:05.894430 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-02 00:55:05.894449 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-02 00:55:05.894456 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-02 00:55:05.894463 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-02 00:55:05.894470 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-09-02 00:55:05.894476 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-09-02 00:55:05.894483 | 
orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-09-02 00:55:05.894490 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-02 00:55:05.894497 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-02 00:55:05.894503 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-09-02 00:55:05.894510 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-09-02 00:55:05.894517 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-09-02 00:55:05.894523 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-02 00:55:05.894530 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-09-02 00:55:05.894537 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-09-02 00:55:05.894543 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-02 00:55:05.894550 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-09-02 00:55:05.894557 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-09-02 00:55:05.894564 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-09-02 00:55:05.894570 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-09-02 00:55:05.894577 | orchestrator | 2025-09-02 00:55:05.894584 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-09-02 00:55:05.894590 | orchestrator | Tuesday 02 September 2025 00:46:39 +0000 (0:00:07.134) 0:03:14.441 ***** 2025-09-02 00:55:05.894597 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.894604 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.894610 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.894617 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.894624 | orchestrator | 2025-09-02 00:55:05.894631 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-09-02 00:55:05.894637 | orchestrator | Tuesday 02 September 2025 00:46:40 +0000 (0:00:01.441) 0:03:15.882 ***** 2025-09-02 00:55:05.894644 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-02 00:55:05.894651 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-02 00:55:05.894661 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-02 00:55:05.894668 | orchestrator | 2025-09-02 00:55:05.894675 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-09-02 00:55:05.894686 | orchestrator | Tuesday 02 September 2025 00:46:41 +0000 (0:00:00.765) 0:03:16.648 ***** 2025-09-02 00:55:05.894692 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-02 00:55:05.894699 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-02 00:55:05.894706 | orchestrator | changed: [testbed-node-5] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-02 00:55:05.894713 | orchestrator | 2025-09-02 00:55:05.894720 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-09-02 00:55:05.894727 | orchestrator | Tuesday 02 September 2025 00:46:42 +0000 (0:00:01.579) 0:03:18.228 ***** 2025-09-02 00:55:05.894733 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.894740 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.894747 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.894753 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.894760 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.894767 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.894773 | orchestrator | 2025-09-02 00:55:05.894780 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-09-02 00:55:05.894787 | orchestrator | Tuesday 02 September 2025 00:46:43 +0000 (0:00:00.774) 0:03:19.002 ***** 2025-09-02 00:55:05.894793 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.894800 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.894807 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.894813 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.894820 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.894827 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.894833 | orchestrator | 2025-09-02 00:55:05.894840 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-09-02 00:55:05.894847 | orchestrator | Tuesday 02 September 2025 00:46:44 +0000 (0:00:01.031) 0:03:20.034 ***** 2025-09-02 00:55:05.894854 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.894860 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.894867 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.894873 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.894880 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.894887 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.894893 | orchestrator | 2025-09-02 00:55:05.894900 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-09-02 00:55:05.894907 | orchestrator | Tuesday 02 September 2025 00:46:45 +0000 (0:00:00.829) 0:03:20.864 ***** 2025-09-02 00:55:05.894918 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.894925 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.894931 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.894938 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.894945 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.894951 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.894958 | orchestrator | 2025-09-02 00:55:05.894965 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-09-02 00:55:05.894971 | orchestrator | Tuesday 02 September 2025 00:46:46 +0000 (0:00:00.692) 0:03:21.556 ***** 2025-09-02 00:55:05.894978 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.894985 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.894991 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.894998 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.895005 | orchestrator | skipping: [testbed-node-0] 
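(Editor's note) The ceph-config tasks above, "Create rados gateway instance directories" and "Generate environment file", loop over each RGW host's rgw_instances list (instance_name, radosgw_address, radosgw_frontend_port, as visible in the item dumps). A minimal illustrative Ansible sketch of that per-instance pattern follows; the directory layout, ownership values and EnvironmentFile contents are assumptions for illustration, not the actual ceph-ansible source.

    - name: Create rados gateway instance directories (illustrative sketch)
      ansible.builtin.file:
        # path layout is an assumption; ceph-ansible derives it from cluster name and hostname
        path: "/var/lib/ceph/radosgw/ceph-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
        state: directory
        owner: "167"   # ceph uid used inside the container image (assumed)
        group: "167"
        mode: "0755"
      loop: "{{ rgw_instances }}"

    - name: Generate environment file (illustrative sketch)
      ansible.builtin.copy:
        dest: "/var/lib/ceph/radosgw/ceph-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}/EnvironmentFile"
        content: |
          INST_NAME={{ item.instance_name }}
        owner: "167"
        group: "167"
        mode: "0644"
      loop: "{{ rgw_instances }}"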
2025-09-02 00:55:05.895011 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.895018 | orchestrator | 2025-09-02 00:55:05.895025 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-09-02 00:55:05.895032 | orchestrator | Tuesday 02 September 2025 00:46:47 +0000 (0:00:00.752) 0:03:22.309 ***** 2025-09-02 00:55:05.895042 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.895049 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.895056 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.895062 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.895069 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.895076 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.895082 | orchestrator | 2025-09-02 00:55:05.895089 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-09-02 00:55:05.895096 | orchestrator | Tuesday 02 September 2025 00:46:47 +0000 (0:00:00.645) 0:03:22.954 ***** 2025-09-02 00:55:05.895103 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.895109 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.895116 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.895123 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.895129 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.895136 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.895142 | orchestrator | 2025-09-02 00:55:05.895149 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-09-02 00:55:05.895156 | orchestrator | Tuesday 02 September 2025 00:46:48 +0000 (0:00:00.733) 0:03:23.688 ***** 2025-09-02 00:55:05.895163 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.895169 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.895176 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.895182 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.895189 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.895196 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.895202 | orchestrator | 2025-09-02 00:55:05.895209 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-09-02 00:55:05.895216 | orchestrator | Tuesday 02 September 2025 00:46:48 +0000 (0:00:00.509) 0:03:24.198 ***** 2025-09-02 00:55:05.895222 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.895235 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.895242 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.895248 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.895255 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.895262 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.895268 | orchestrator | 2025-09-02 00:55:05.895275 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-09-02 00:55:05.895282 | orchestrator | Tuesday 02 September 2025 00:46:51 +0000 (0:00:02.546) 0:03:26.744 ***** 2025-09-02 00:55:05.895289 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.895296 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.895302 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.895309 | orchestrator | skipping: [testbed-node-0] 
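(Editor's note) The OSD-count tasks above first try "ceph-volume lvm batch --report" (skipped here because the device list is predefined) and then add any OSDs already reported by "ceph-volume lvm list" to num_osds. A rough sketch of that second step, assuming the command is run directly on the host and that each top-level key of its JSON output is one existing OSD (the container wrapper ceph-ansible actually uses is omitted):

    - name: Run 'ceph-volume lvm list' to see how many osds have already been created (sketch)
      ansible.builtin.command: ceph-volume lvm list --format json
      register: lvm_list
      changed_when: false

    - name: Set_fact num_osds (add existing osds) (sketch)
      ansible.builtin.set_fact:
        # count the OSD entries in the JSON report and add them to the running total
        num_osds: "{{ (num_osds | default(0) | int) + (lvm_list.stdout | from_json | length) }}"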
2025-09-02 00:55:05.895316 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.895322 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.895329 | orchestrator | 2025-09-02 00:55:05.895336 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-09-02 00:55:05.895342 | orchestrator | Tuesday 02 September 2025 00:46:52 +0000 (0:00:00.686) 0:03:27.430 ***** 2025-09-02 00:55:05.895349 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.895356 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.895363 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.895369 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.895376 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.895382 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.895389 | orchestrator | 2025-09-02 00:55:05.895396 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-09-02 00:55:05.895403 | orchestrator | Tuesday 02 September 2025 00:46:53 +0000 (0:00:01.214) 0:03:28.644 ***** 2025-09-02 00:55:05.895409 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.895416 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.895437 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.895444 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.895450 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.895457 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.895464 | orchestrator | 2025-09-02 00:55:05.895470 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-09-02 00:55:05.895477 | orchestrator | Tuesday 02 September 2025 00:46:54 +0000 (0:00:00.949) 0:03:29.594 ***** 2025-09-02 00:55:05.895484 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-02 00:55:05.895491 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-02 00:55:05.895498 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-02 00:55:05.895505 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.895511 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.895518 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.895525 | orchestrator | 2025-09-02 00:55:05.895535 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-09-02 00:55:05.895542 | orchestrator | Tuesday 02 September 2025 00:46:55 +0000 (0:00:01.035) 0:03:30.630 ***** 2025-09-02 00:55:05.895550 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-09-02 00:55:05.895559 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-09-02 
00:55:05.895566 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.895573 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-09-02 00:55:05.895581 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-09-02 00:55:05.895588 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-09-02 00:55:05.895594 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-09-02 00:55:05.895604 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.895611 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.895618 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.895624 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.895631 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.895638 | orchestrator | 2025-09-02 00:55:05.895644 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-09-02 00:55:05.895655 | orchestrator | Tuesday 02 September 2025 00:46:56 +0000 (0:00:01.481) 0:03:32.111 ***** 2025-09-02 00:55:05.895662 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.895668 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.895675 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.895682 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.895688 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.895695 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.895702 | orchestrator | 2025-09-02 00:55:05.895708 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-09-02 00:55:05.895715 | orchestrator | Tuesday 02 September 2025 00:46:57 +0000 (0:00:00.910) 0:03:33.021 ***** 2025-09-02 00:55:05.895722 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.895728 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.895735 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.895741 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.895748 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.895755 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.895761 | orchestrator | 2025-09-02 00:55:05.895768 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-02 00:55:05.895775 | orchestrator | Tuesday 02 September 2025 00:46:58 +0000 
(0:00:01.058) 0:03:34.080 ***** 2025-09-02 00:55:05.895782 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.895788 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.895795 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.895802 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.895808 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.895815 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.895822 | orchestrator | 2025-09-02 00:55:05.895828 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-02 00:55:05.895835 | orchestrator | Tuesday 02 September 2025 00:47:00 +0000 (0:00:01.750) 0:03:35.831 ***** 2025-09-02 00:55:05.895842 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.895848 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.895855 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.895862 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.895868 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.895875 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.895882 | orchestrator | 2025-09-02 00:55:05.895888 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-02 00:55:05.895895 | orchestrator | Tuesday 02 September 2025 00:47:01 +0000 (0:00:01.310) 0:03:37.141 ***** 2025-09-02 00:55:05.895902 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.895912 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.895919 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.895926 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.895932 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.895939 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.895946 | orchestrator | 2025-09-02 00:55:05.895952 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-02 00:55:05.895959 | orchestrator | Tuesday 02 September 2025 00:47:03 +0000 (0:00:01.402) 0:03:38.543 ***** 2025-09-02 00:55:05.895966 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.895972 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.895979 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.895986 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.895992 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.895999 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.896006 | orchestrator | 2025-09-02 00:55:05.896012 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-02 00:55:05.896019 | orchestrator | Tuesday 02 September 2025 00:47:04 +0000 (0:00:01.608) 0:03:40.152 ***** 2025-09-02 00:55:05.896026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-02 00:55:05.896036 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-02 00:55:05.896043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-02 00:55:05.896049 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.896056 | orchestrator | 2025-09-02 00:55:05.896063 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-02 00:55:05.896069 | orchestrator | Tuesday 02 September 2025 00:47:05 +0000 (0:00:00.781) 0:03:40.934 ***** 2025-09-02 00:55:05.896076 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-02 00:55:05.896082 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-02 00:55:05.896089 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-02 00:55:05.896096 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.896102 | orchestrator | 2025-09-02 00:55:05.896109 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-02 00:55:05.896116 | orchestrator | Tuesday 02 September 2025 00:47:06 +0000 (0:00:00.527) 0:03:41.461 ***** 2025-09-02 00:55:05.896123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-02 00:55:05.896129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-02 00:55:05.896136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-02 00:55:05.896143 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.896149 | orchestrator | 2025-09-02 00:55:05.896156 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-02 00:55:05.896163 | orchestrator | Tuesday 02 September 2025 00:47:06 +0000 (0:00:00.670) 0:03:42.132 ***** 2025-09-02 00:55:05.896169 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.896176 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.896183 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.896189 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.896196 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.896205 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.896212 | orchestrator | 2025-09-02 00:55:05.896219 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-02 00:55:05.896226 | orchestrator | Tuesday 02 September 2025 00:47:07 +0000 (0:00:00.743) 0:03:42.876 ***** 2025-09-02 00:55:05.896232 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-02 00:55:05.896239 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-02 00:55:05.896246 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-02 00:55:05.896252 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-09-02 00:55:05.896259 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-09-02 00:55:05.896266 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.896273 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.896279 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-09-02 00:55:05.896286 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.896293 | orchestrator | 2025-09-02 00:55:05.896299 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-09-02 00:55:05.896306 | orchestrator | Tuesday 02 September 2025 00:47:10 +0000 (0:00:02.618) 0:03:45.494 ***** 2025-09-02 00:55:05.896313 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.896319 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.896326 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.896333 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.896339 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.896346 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.896353 | orchestrator | 2025-09-02 00:55:05.896359 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-02 00:55:05.896366 | 
orchestrator | Tuesday 02 September 2025 00:47:14 +0000 (0:00:03.909) 0:03:49.403 ***** 2025-09-02 00:55:05.896373 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.896379 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.896386 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.896396 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.896403 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.896409 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.896416 | orchestrator | 2025-09-02 00:55:05.896451 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-02 00:55:05.896459 | orchestrator | Tuesday 02 September 2025 00:47:16 +0000 (0:00:02.220) 0:03:51.624 ***** 2025-09-02 00:55:05.896465 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.896472 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.896479 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.896485 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.896492 | orchestrator | 2025-09-02 00:55:05.896499 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-02 00:55:05.896506 | orchestrator | Tuesday 02 September 2025 00:47:17 +0000 (0:00:01.352) 0:03:52.976 ***** 2025-09-02 00:55:05.896513 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.896519 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.896526 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.896533 | orchestrator | 2025-09-02 00:55:05.896544 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-02 00:55:05.896551 | orchestrator | Tuesday 02 September 2025 00:47:18 +0000 (0:00:00.511) 0:03:53.488 ***** 2025-09-02 00:55:05.896557 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.896564 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.896571 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.896577 | orchestrator | 2025-09-02 00:55:05.896584 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-02 00:55:05.896591 | orchestrator | Tuesday 02 September 2025 00:47:19 +0000 (0:00:01.628) 0:03:55.117 ***** 2025-09-02 00:55:05.896597 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-02 00:55:05.896603 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-02 00:55:05.896609 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-02 00:55:05.896616 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.896622 | orchestrator | 2025-09-02 00:55:05.896628 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-02 00:55:05.896634 | orchestrator | Tuesday 02 September 2025 00:47:20 +0000 (0:00:00.923) 0:03:56.041 ***** 2025-09-02 00:55:05.896640 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.896647 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.896653 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.896659 | orchestrator | 2025-09-02 00:55:05.896665 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-02 00:55:05.896672 | orchestrator | Tuesday 02 September 2025 00:47:21 +0000 (0:00:00.435) 0:03:56.477 ***** 
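(Editor's note) The handler sequence above (make a temp dir for scripts, set _mon_handler_called, copy a mon restart script, then conditionally restart the mons) is the usual ceph-ansible rolling-restart pattern; the restart itself is skipped in this run because no change requires it. An illustrative sketch of the copy/restart pair, with the template name and temp-dir fact assumed for illustration (handler_mon_status is the fact set earlier in this play):

    - name: Copy mon restart script (sketch)
      ansible.builtin.template:
        src: restart_mon_daemon.sh.j2          # hypothetical template name
        dest: "{{ tmpdirpath.path }}/restart_mon_daemon.sh"
        mode: "0750"
      when: tmpdirpath.path is defined

    - name: Restart ceph mon daemon(s) (sketch)
      ansible.builtin.command: "{{ hostvars[item]['tmpdirpath']['path'] }}/restart_mon_daemon.sh"
      loop: "{{ groups['mons'] }}"             # one monitor at a time, so quorum is preserved
      delegate_to: "{{ item }}"
      run_once: true
      when: hostvars[item]['handler_mon_status'] | default(false) | bool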
2025-09-02 00:55:05.896678 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.896684 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.896690 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.896697 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.896703 | orchestrator | 2025-09-02 00:55:05.896709 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-02 00:55:05.896715 | orchestrator | Tuesday 02 September 2025 00:47:22 +0000 (0:00:01.163) 0:03:57.640 ***** 2025-09-02 00:55:05.896722 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-02 00:55:05.896728 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-02 00:55:05.896734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-02 00:55:05.896740 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.896746 | orchestrator | 2025-09-02 00:55:05.896753 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-02 00:55:05.896763 | orchestrator | Tuesday 02 September 2025 00:47:22 +0000 (0:00:00.377) 0:03:58.017 ***** 2025-09-02 00:55:05.896769 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.896775 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.896781 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.896787 | orchestrator | 2025-09-02 00:55:05.896794 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-02 00:55:05.896803 | orchestrator | Tuesday 02 September 2025 00:47:23 +0000 (0:00:00.651) 0:03:58.669 ***** 2025-09-02 00:55:05.896809 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.896816 | orchestrator | 2025-09-02 00:55:05.896822 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-02 00:55:05.896828 | orchestrator | Tuesday 02 September 2025 00:47:23 +0000 (0:00:00.225) 0:03:58.895 ***** 2025-09-02 00:55:05.896834 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.896840 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.896847 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.896853 | orchestrator | 2025-09-02 00:55:05.896859 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-02 00:55:05.896865 | orchestrator | Tuesday 02 September 2025 00:47:23 +0000 (0:00:00.353) 0:03:59.248 ***** 2025-09-02 00:55:05.896871 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.896877 | orchestrator | 2025-09-02 00:55:05.896884 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-02 00:55:05.896890 | orchestrator | Tuesday 02 September 2025 00:47:24 +0000 (0:00:00.224) 0:03:59.472 ***** 2025-09-02 00:55:05.896896 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.896902 | orchestrator | 2025-09-02 00:55:05.896909 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-02 00:55:05.896915 | orchestrator | Tuesday 02 September 2025 00:47:24 +0000 (0:00:00.234) 0:03:59.706 ***** 2025-09-02 00:55:05.896921 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.896927 | orchestrator | 2025-09-02 00:55:05.896933 | orchestrator | RUNNING HANDLER 
[ceph-handler : Disable balancer] ****************************** 2025-09-02 00:55:05.896940 | orchestrator | Tuesday 02 September 2025 00:47:24 +0000 (0:00:00.113) 0:03:59.820 ***** 2025-09-02 00:55:05.896946 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.896952 | orchestrator | 2025-09-02 00:55:05.896958 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-02 00:55:05.896964 | orchestrator | Tuesday 02 September 2025 00:47:24 +0000 (0:00:00.221) 0:04:00.042 ***** 2025-09-02 00:55:05.896970 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.896977 | orchestrator | 2025-09-02 00:55:05.896983 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-02 00:55:05.896989 | orchestrator | Tuesday 02 September 2025 00:47:24 +0000 (0:00:00.244) 0:04:00.286 ***** 2025-09-02 00:55:05.896995 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-02 00:55:05.897002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-02 00:55:05.897008 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-02 00:55:05.897014 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.897020 | orchestrator | 2025-09-02 00:55:05.897026 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-02 00:55:05.897033 | orchestrator | Tuesday 02 September 2025 00:47:25 +0000 (0:00:00.654) 0:04:00.941 ***** 2025-09-02 00:55:05.897039 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.897049 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.897055 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.897061 | orchestrator | 2025-09-02 00:55:05.897068 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-02 00:55:05.897074 | orchestrator | Tuesday 02 September 2025 00:47:26 +0000 (0:00:00.593) 0:04:01.535 ***** 2025-09-02 00:55:05.897080 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.897086 | orchestrator | 2025-09-02 00:55:05.897092 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-02 00:55:05.897102 | orchestrator | Tuesday 02 September 2025 00:47:26 +0000 (0:00:00.233) 0:04:01.768 ***** 2025-09-02 00:55:05.897109 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.897115 | orchestrator | 2025-09-02 00:55:05.897121 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-02 00:55:05.897128 | orchestrator | Tuesday 02 September 2025 00:47:26 +0000 (0:00:00.219) 0:04:01.988 ***** 2025-09-02 00:55:05.897134 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.897140 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.897146 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.897153 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.897159 | orchestrator | 2025-09-02 00:55:05.897165 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-02 00:55:05.897171 | orchestrator | Tuesday 02 September 2025 00:47:27 +0000 (0:00:00.829) 0:04:02.817 ***** 2025-09-02 00:55:05.897178 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.897184 | orchestrator | ok: [testbed-node-4] 2025-09-02 
00:55:05.897190 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.897196 | orchestrator | 2025-09-02 00:55:05.897202 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-02 00:55:05.897209 | orchestrator | Tuesday 02 September 2025 00:47:28 +0000 (0:00:00.557) 0:04:03.374 ***** 2025-09-02 00:55:05.897215 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.897221 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.897228 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.897234 | orchestrator | 2025-09-02 00:55:05.897240 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-02 00:55:05.897246 | orchestrator | Tuesday 02 September 2025 00:47:29 +0000 (0:00:01.273) 0:04:04.648 ***** 2025-09-02 00:55:05.897253 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-02 00:55:05.897259 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-02 00:55:05.897265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-02 00:55:05.897271 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.897277 | orchestrator | 2025-09-02 00:55:05.897284 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-02 00:55:05.897290 | orchestrator | Tuesday 02 September 2025 00:47:29 +0000 (0:00:00.604) 0:04:05.253 ***** 2025-09-02 00:55:05.897296 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.897302 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.897309 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.897315 | orchestrator | 2025-09-02 00:55:05.897324 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-02 00:55:05.897330 | orchestrator | Tuesday 02 September 2025 00:47:30 +0000 (0:00:00.375) 0:04:05.629 ***** 2025-09-02 00:55:05.897336 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.897342 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.897349 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.897355 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.897361 | orchestrator | 2025-09-02 00:55:05.897367 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-02 00:55:05.897374 | orchestrator | Tuesday 02 September 2025 00:47:31 +0000 (0:00:01.212) 0:04:06.841 ***** 2025-09-02 00:55:05.897380 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.897386 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.897392 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.897399 | orchestrator | 2025-09-02 00:55:05.897405 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-02 00:55:05.897411 | orchestrator | Tuesday 02 September 2025 00:47:31 +0000 (0:00:00.355) 0:04:07.197 ***** 2025-09-02 00:55:05.897417 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.897437 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.897443 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.897449 | orchestrator | 2025-09-02 00:55:05.897456 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-02 00:55:05.897462 | orchestrator | Tuesday 02 September 2025 
00:47:33 +0000 (0:00:01.444) 0:04:08.642 ***** 2025-09-02 00:55:05.897468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-02 00:55:05.897474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-02 00:55:05.897480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-02 00:55:05.897487 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.897493 | orchestrator | 2025-09-02 00:55:05.897499 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-02 00:55:05.897505 | orchestrator | Tuesday 02 September 2025 00:47:34 +0000 (0:00:00.763) 0:04:09.405 ***** 2025-09-02 00:55:05.897511 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.897518 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.897524 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.897530 | orchestrator | 2025-09-02 00:55:05.897536 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-09-02 00:55:05.897542 | orchestrator | Tuesday 02 September 2025 00:47:34 +0000 (0:00:00.486) 0:04:09.892 ***** 2025-09-02 00:55:05.897549 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.897555 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.897561 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.897567 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.897573 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.897579 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.897586 | orchestrator | 2025-09-02 00:55:05.897592 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-02 00:55:05.897602 | orchestrator | Tuesday 02 September 2025 00:47:35 +0000 (0:00:00.569) 0:04:10.461 ***** 2025-09-02 00:55:05.897609 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.897615 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.897621 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.897627 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.897634 | orchestrator | 2025-09-02 00:55:05.897640 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-02 00:55:05.897646 | orchestrator | Tuesday 02 September 2025 00:47:36 +0000 (0:00:01.226) 0:04:11.688 ***** 2025-09-02 00:55:05.897653 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.897659 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.897665 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.897671 | orchestrator | 2025-09-02 00:55:05.897677 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-02 00:55:05.897684 | orchestrator | Tuesday 02 September 2025 00:47:36 +0000 (0:00:00.330) 0:04:12.018 ***** 2025-09-02 00:55:05.897690 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.897696 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.897702 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.897708 | orchestrator | 2025-09-02 00:55:05.897715 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-02 00:55:05.897721 | orchestrator | Tuesday 02 September 2025 00:47:38 +0000 (0:00:01.562) 0:04:13.581 ***** 2025-09-02 00:55:05.897727 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-02 00:55:05.897733 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-02 00:55:05.897739 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-02 00:55:05.897746 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.897752 | orchestrator | 2025-09-02 00:55:05.897758 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-02 00:55:05.897764 | orchestrator | Tuesday 02 September 2025 00:47:38 +0000 (0:00:00.643) 0:04:14.225 ***** 2025-09-02 00:55:05.897775 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.897782 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.897788 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.897794 | orchestrator | 2025-09-02 00:55:05.897800 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-09-02 00:55:05.897807 | orchestrator | 2025-09-02 00:55:05.897813 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-02 00:55:05.897819 | orchestrator | Tuesday 02 September 2025 00:47:39 +0000 (0:00:00.611) 0:04:14.836 ***** 2025-09-02 00:55:05.897825 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.897832 | orchestrator | 2025-09-02 00:55:05.897838 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-02 00:55:05.897844 | orchestrator | Tuesday 02 September 2025 00:47:40 +0000 (0:00:00.875) 0:04:15.712 ***** 2025-09-02 00:55:05.897853 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.897860 | orchestrator | 2025-09-02 00:55:05.897866 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-02 00:55:05.897872 | orchestrator | Tuesday 02 September 2025 00:47:41 +0000 (0:00:00.605) 0:04:16.317 ***** 2025-09-02 00:55:05.897878 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.897885 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.897891 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.897897 | orchestrator | 2025-09-02 00:55:05.897903 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-02 00:55:05.897910 | orchestrator | Tuesday 02 September 2025 00:47:42 +0000 (0:00:01.081) 0:04:17.398 ***** 2025-09-02 00:55:05.897916 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.897922 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.897928 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.897934 | orchestrator | 2025-09-02 00:55:05.897941 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-02 00:55:05.897947 | orchestrator | Tuesday 02 September 2025 00:47:42 +0000 (0:00:00.597) 0:04:17.996 ***** 2025-09-02 00:55:05.897953 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.897959 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.897965 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.897972 | orchestrator | 2025-09-02 00:55:05.897978 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-02 
00:55:05.897984 | orchestrator | Tuesday 02 September 2025 00:47:43 +0000 (0:00:00.315) 0:04:18.311 ***** 2025-09-02 00:55:05.897990 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.897997 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.898003 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.898009 | orchestrator | 2025-09-02 00:55:05.898085 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-02 00:55:05.898095 | orchestrator | Tuesday 02 September 2025 00:47:43 +0000 (0:00:00.309) 0:04:18.620 ***** 2025-09-02 00:55:05.898101 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.898108 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.898114 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.898120 | orchestrator | 2025-09-02 00:55:05.898126 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-02 00:55:05.898133 | orchestrator | Tuesday 02 September 2025 00:47:44 +0000 (0:00:00.731) 0:04:19.351 ***** 2025-09-02 00:55:05.898139 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.898145 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.898151 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.898157 | orchestrator | 2025-09-02 00:55:05.898164 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-02 00:55:05.898170 | orchestrator | Tuesday 02 September 2025 00:47:44 +0000 (0:00:00.329) 0:04:19.680 ***** 2025-09-02 00:55:05.898181 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.898187 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.898193 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.898199 | orchestrator | 2025-09-02 00:55:05.898226 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-02 00:55:05.898234 | orchestrator | Tuesday 02 September 2025 00:47:44 +0000 (0:00:00.577) 0:04:20.258 ***** 2025-09-02 00:55:05.898240 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.898246 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.898253 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.898259 | orchestrator | 2025-09-02 00:55:05.898265 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-02 00:55:05.898271 | orchestrator | Tuesday 02 September 2025 00:47:45 +0000 (0:00:00.803) 0:04:21.061 ***** 2025-09-02 00:55:05.898277 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.898284 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.898290 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.898296 | orchestrator | 2025-09-02 00:55:05.898302 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-02 00:55:05.898308 | orchestrator | Tuesday 02 September 2025 00:47:46 +0000 (0:00:00.872) 0:04:21.933 ***** 2025-09-02 00:55:05.898314 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.898321 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.898327 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.898333 | orchestrator | 2025-09-02 00:55:05.898339 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-02 00:55:05.898345 | orchestrator | Tuesday 02 September 2025 00:47:46 +0000 (0:00:00.322) 0:04:22.256 ***** 
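The container checks above feed the handler_*_status facts that decide whether a daemon can be restarted at all. Conceptually each check is a name-filtered container lookup; a minimal sketch, assuming podman as the container runtime and the usual ceph-<daemon>-<hostname> container naming (both assumptions, not read from this log):

  # Prints a container ID only when a matching container is running; empty output
  # means the daemon is not up on this host yet.
  podman ps -q --filter "name=ceph-mon-$(hostname -s)"

  # The same pattern covers the other daemons checked above, e.g.:
  podman ps -q --filter "name=ceph-mgr-$(hostname -s)"
  podman ps -q --filter "name=ceph-crash-$(hostname -s)"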
2025-09-02 00:55:05.898352 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.898358 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.898364 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.898370 | orchestrator | 2025-09-02 00:55:05.898376 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-02 00:55:05.898382 | orchestrator | Tuesday 02 September 2025 00:47:47 +0000 (0:00:00.583) 0:04:22.840 ***** 2025-09-02 00:55:05.898389 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.898395 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.898401 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.898407 | orchestrator | 2025-09-02 00:55:05.898413 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-02 00:55:05.898420 | orchestrator | Tuesday 02 September 2025 00:47:47 +0000 (0:00:00.319) 0:04:23.160 ***** 2025-09-02 00:55:05.898454 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.898460 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.898466 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.898472 | orchestrator | 2025-09-02 00:55:05.898479 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-02 00:55:05.898485 | orchestrator | Tuesday 02 September 2025 00:47:48 +0000 (0:00:00.296) 0:04:23.456 ***** 2025-09-02 00:55:05.898491 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.898497 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.898504 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.898510 | orchestrator | 2025-09-02 00:55:05.898516 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-02 00:55:05.898522 | orchestrator | Tuesday 02 September 2025 00:47:48 +0000 (0:00:00.311) 0:04:23.768 ***** 2025-09-02 00:55:05.898529 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.898541 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.898547 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.898553 | orchestrator | 2025-09-02 00:55:05.898559 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-02 00:55:05.898566 | orchestrator | Tuesday 02 September 2025 00:47:49 +0000 (0:00:00.545) 0:04:24.314 ***** 2025-09-02 00:55:05.898572 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.898582 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.898588 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.898593 | orchestrator | 2025-09-02 00:55:05.898598 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-02 00:55:05.898604 | orchestrator | Tuesday 02 September 2025 00:47:49 +0000 (0:00:00.313) 0:04:24.627 ***** 2025-09-02 00:55:05.898609 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.898615 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.898620 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.898626 | orchestrator | 2025-09-02 00:55:05.898631 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-02 00:55:05.898637 | orchestrator | Tuesday 02 September 2025 00:47:49 +0000 (0:00:00.330) 0:04:24.957 ***** 2025-09-02 00:55:05.898642 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.898647 | 
orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.898653 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.898658 | orchestrator | 2025-09-02 00:55:05.898664 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-02 00:55:05.898669 | orchestrator | Tuesday 02 September 2025 00:47:49 +0000 (0:00:00.327) 0:04:25.285 ***** 2025-09-02 00:55:05.898675 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.898680 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.898685 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.898691 | orchestrator | 2025-09-02 00:55:05.898696 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-09-02 00:55:05.898702 | orchestrator | Tuesday 02 September 2025 00:47:50 +0000 (0:00:00.838) 0:04:26.123 ***** 2025-09-02 00:55:05.898707 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.898713 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.898718 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.898723 | orchestrator | 2025-09-02 00:55:05.898729 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-09-02 00:55:05.898734 | orchestrator | Tuesday 02 September 2025 00:47:51 +0000 (0:00:00.444) 0:04:26.568 ***** 2025-09-02 00:55:05.898740 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.898745 | orchestrator | 2025-09-02 00:55:05.898751 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-09-02 00:55:05.898756 | orchestrator | Tuesday 02 September 2025 00:47:51 +0000 (0:00:00.601) 0:04:27.170 ***** 2025-09-02 00:55:05.898762 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.898767 | orchestrator | 2025-09-02 00:55:05.898773 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-09-02 00:55:05.898798 | orchestrator | Tuesday 02 September 2025 00:47:52 +0000 (0:00:00.361) 0:04:27.531 ***** 2025-09-02 00:55:05.898804 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-09-02 00:55:05.898810 | orchestrator | 2025-09-02 00:55:05.898815 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-09-02 00:55:05.898821 | orchestrator | Tuesday 02 September 2025 00:47:53 +0000 (0:00:01.043) 0:04:28.575 ***** 2025-09-02 00:55:05.898826 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.898832 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.898837 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.898842 | orchestrator | 2025-09-02 00:55:05.898848 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-09-02 00:55:05.898853 | orchestrator | Tuesday 02 September 2025 00:47:53 +0000 (0:00:00.402) 0:04:28.977 ***** 2025-09-02 00:55:05.898859 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.898864 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.898870 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.898875 | orchestrator | 2025-09-02 00:55:05.898880 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-09-02 00:55:05.898886 | orchestrator | Tuesday 02 September 2025 00:47:54 +0000 (0:00:00.363) 0:04:29.341 ***** 2025-09-02 00:55:05.898891 | orchestrator | changed: [testbed-node-0] 
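The keyring and monmap tasks in this play mirror the standard manual mon bootstrap procedure. A rough equivalent with ceph-authtool and monmaptool, using the upstream capability strings and placeholder paths rather than the exact arguments ceph-ansible renders:

  # Generate the monitor bootstrap keyring (done once, here on the Ansible controller).
  ceph-authtool --create-keyring /tmp/ceph.mon.keyring \
      --gen-key -n mon. --cap mon 'allow *'

  # Create the admin keyring and merge it into the mon keyring (the next tasks in the log).
  ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
      --gen-key -n client.admin \
      --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
  ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring

  # Build the initial monmap and initialize the mon data directory ("Generate initial
  # monmap" and "Ceph monitor mkfs with keyring" below). <cluster-fsid> is a placeholder.
  monmaptool --create --add testbed-node-0 192.168.16.10 --fsid <cluster-fsid> /tmp/monmap
  ceph-mon --mkfs -i testbed-node-0 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring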
2025-09-02 00:55:05.898900 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.898906 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.898911 | orchestrator | 2025-09-02 00:55:05.898916 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-09-02 00:55:05.898922 | orchestrator | Tuesday 02 September 2025 00:47:55 +0000 (0:00:01.254) 0:04:30.595 ***** 2025-09-02 00:55:05.898927 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.898933 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.898938 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.898944 | orchestrator | 2025-09-02 00:55:05.898949 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-09-02 00:55:05.898955 | orchestrator | Tuesday 02 September 2025 00:47:56 +0000 (0:00:01.096) 0:04:31.691 ***** 2025-09-02 00:55:05.898960 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.898966 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.898971 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.898976 | orchestrator | 2025-09-02 00:55:05.898982 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-09-02 00:55:05.898987 | orchestrator | Tuesday 02 September 2025 00:47:57 +0000 (0:00:00.688) 0:04:32.380 ***** 2025-09-02 00:55:05.898993 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.898998 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.899004 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.899009 | orchestrator | 2025-09-02 00:55:05.899015 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-09-02 00:55:05.899020 | orchestrator | Tuesday 02 September 2025 00:47:57 +0000 (0:00:00.660) 0:04:33.041 ***** 2025-09-02 00:55:05.899026 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.899031 | orchestrator | 2025-09-02 00:55:05.899036 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-09-02 00:55:05.899042 | orchestrator | Tuesday 02 September 2025 00:47:59 +0000 (0:00:01.281) 0:04:34.322 ***** 2025-09-02 00:55:05.899047 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.899053 | orchestrator | 2025-09-02 00:55:05.899061 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-09-02 00:55:05.899066 | orchestrator | Tuesday 02 September 2025 00:47:59 +0000 (0:00:00.709) 0:04:35.032 ***** 2025-09-02 00:55:05.899072 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-02 00:55:05.899077 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:55:05.899083 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:55:05.899088 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-02 00:55:05.899094 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-09-02 00:55:05.899099 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-02 00:55:05.899105 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-02 00:55:05.899110 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-09-02 00:55:05.899115 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-02 
00:55:05.899121 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-09-02 00:55:05.899126 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-09-02 00:55:05.899132 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-09-02 00:55:05.899137 | orchestrator | 2025-09-02 00:55:05.899143 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-09-02 00:55:05.899148 | orchestrator | Tuesday 02 September 2025 00:48:03 +0000 (0:00:03.664) 0:04:38.697 ***** 2025-09-02 00:55:05.899153 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.899159 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.899164 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.899170 | orchestrator | 2025-09-02 00:55:05.899175 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-09-02 00:55:05.899181 | orchestrator | Tuesday 02 September 2025 00:48:05 +0000 (0:00:01.676) 0:04:40.373 ***** 2025-09-02 00:55:05.899189 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.899195 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.899200 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.899206 | orchestrator | 2025-09-02 00:55:05.899211 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-09-02 00:55:05.899217 | orchestrator | Tuesday 02 September 2025 00:48:05 +0000 (0:00:00.479) 0:04:40.852 ***** 2025-09-02 00:55:05.899222 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.899228 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.899233 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.899238 | orchestrator | 2025-09-02 00:55:05.899244 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-09-02 00:55:05.899249 | orchestrator | Tuesday 02 September 2025 00:48:06 +0000 (0:00:00.716) 0:04:41.569 ***** 2025-09-02 00:55:05.899255 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.899260 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.899266 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.899271 | orchestrator | 2025-09-02 00:55:05.899293 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-09-02 00:55:05.899299 | orchestrator | Tuesday 02 September 2025 00:48:08 +0000 (0:00:02.019) 0:04:43.589 ***** 2025-09-02 00:55:05.899305 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.899310 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.899316 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.899321 | orchestrator | 2025-09-02 00:55:05.899327 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-09-02 00:55:05.899332 | orchestrator | Tuesday 02 September 2025 00:48:10 +0000 (0:00:02.070) 0:04:45.659 ***** 2025-09-02 00:55:05.899337 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.899343 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.899348 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.899354 | orchestrator | 2025-09-02 00:55:05.899359 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-09-02 00:55:05.899365 | orchestrator | Tuesday 02 September 2025 00:48:10 +0000 (0:00:00.353) 0:04:46.013 ***** 2025-09-02 00:55:05.899370 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.899376 | orchestrator | 2025-09-02 00:55:05.899381 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-09-02 00:55:05.899386 | orchestrator | Tuesday 02 September 2025 00:48:11 +0000 (0:00:00.553) 0:04:46.567 ***** 2025-09-02 00:55:05.899392 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.899397 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.899403 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.899408 | orchestrator | 2025-09-02 00:55:05.899413 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-09-02 00:55:05.899419 | orchestrator | Tuesday 02 September 2025 00:48:11 +0000 (0:00:00.581) 0:04:47.148 ***** 2025-09-02 00:55:05.899434 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.899440 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.899445 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.899451 | orchestrator | 2025-09-02 00:55:05.899456 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-09-02 00:55:05.899462 | orchestrator | Tuesday 02 September 2025 00:48:12 +0000 (0:00:00.332) 0:04:47.481 ***** 2025-09-02 00:55:05.899467 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-2, testbed-node-1 2025-09-02 00:55:05.899473 | orchestrator | 2025-09-02 00:55:05.899478 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-09-02 00:55:05.899483 | orchestrator | Tuesday 02 September 2025 00:48:12 +0000 (0:00:00.566) 0:04:48.048 ***** 2025-09-02 00:55:05.899489 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.899494 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.899503 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.899509 | orchestrator | 2025-09-02 00:55:05.899514 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-09-02 00:55:05.899520 | orchestrator | Tuesday 02 September 2025 00:48:14 +0000 (0:00:02.225) 0:04:50.273 ***** 2025-09-02 00:55:05.899528 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.899533 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.899539 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.899544 | orchestrator | 2025-09-02 00:55:05.899550 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-02 00:55:05.899555 | orchestrator | Tuesday 02 September 2025 00:48:16 +0000 (0:00:01.506) 0:04:51.779 ***** 2025-09-02 00:55:05.899561 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.899566 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.899571 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.899577 | orchestrator | 2025-09-02 00:55:05.899582 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-02 00:55:05.899588 | orchestrator | Tuesday 02 September 2025 00:48:18 +0000 (0:00:01.787) 0:04:53.567 ***** 2025-09-02 00:55:05.899593 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.899599 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.899604 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.899609 | 
orchestrator | 2025-09-02 00:55:05.899615 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-09-02 00:55:05.899620 | orchestrator | Tuesday 02 September 2025 00:48:20 +0000 (0:00:01.992) 0:04:55.560 ***** 2025-09-02 00:55:05.899626 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.899631 | orchestrator | 2025-09-02 00:55:05.899637 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-09-02 00:55:05.899642 | orchestrator | Tuesday 02 September 2025 00:48:21 +0000 (0:00:00.765) 0:04:56.326 ***** 2025-09-02 00:55:05.899648 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-09-02 00:55:05.899653 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.899659 | orchestrator | 2025-09-02 00:55:05.899664 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-02 00:55:05.899669 | orchestrator | Tuesday 02 September 2025 00:48:43 +0000 (0:00:22.024) 0:05:18.350 ***** 2025-09-02 00:55:05.899675 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.899680 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.899686 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.899691 | orchestrator | 2025-09-02 00:55:05.899697 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-02 00:55:05.899702 | orchestrator | Tuesday 02 September 2025 00:48:52 +0000 (0:00:09.910) 0:05:28.261 ***** 2025-09-02 00:55:05.899707 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.899713 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.899718 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.899724 | orchestrator | 2025-09-02 00:55:05.899729 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-02 00:55:05.899735 | orchestrator | Tuesday 02 September 2025 00:48:53 +0000 (0:00:00.323) 0:05:28.585 ***** 2025-09-02 00:55:05.899759 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3e8a81fb507cddb93b87b3e7d4e35e66c6b5bc40'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-02 00:55:05.899766 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3e8a81fb507cddb93b87b3e7d4e35e66c6b5bc40'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-02 00:55:05.899777 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3e8a81fb507cddb93b87b3e7d4e35e66c6b5bc40'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-02 00:55:05.899783 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': 
'192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3e8a81fb507cddb93b87b3e7d4e35e66c6b5bc40'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-02 00:55:05.899789 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3e8a81fb507cddb93b87b3e7d4e35e66c6b5bc40'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-02 00:55:05.899797 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3e8a81fb507cddb93b87b3e7d4e35e66c6b5bc40'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__3e8a81fb507cddb93b87b3e7d4e35e66c6b5bc40'}])  2025-09-02 00:55:05.899803 | orchestrator | 2025-09-02 00:55:05.899809 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-02 00:55:05.899815 | orchestrator | Tuesday 02 September 2025 00:49:07 +0000 (0:00:14.714) 0:05:43.300 ***** 2025-09-02 00:55:05.899820 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.899826 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.899831 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.899836 | orchestrator | 2025-09-02 00:55:05.899842 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-02 00:55:05.899847 | orchestrator | Tuesday 02 September 2025 00:49:08 +0000 (0:00:00.347) 0:05:43.647 ***** 2025-09-02 00:55:05.899853 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.899858 | orchestrator | 2025-09-02 00:55:05.899864 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-02 00:55:05.899869 | orchestrator | Tuesday 02 September 2025 00:49:09 +0000 (0:00:00.801) 0:05:44.449 ***** 2025-09-02 00:55:05.899875 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.899880 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.899885 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.899891 | orchestrator | 2025-09-02 00:55:05.899896 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-02 00:55:05.899902 | orchestrator | Tuesday 02 September 2025 00:49:09 +0000 (0:00:00.323) 0:05:44.773 ***** 2025-09-02 00:55:05.899907 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.899913 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.899918 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.899923 | orchestrator | 2025-09-02 00:55:05.899929 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-02 00:55:05.899934 | orchestrator | Tuesday 02 September 2025 00:49:09 +0000 (0:00:00.444) 0:05:45.218 ***** 2025-09-02 00:55:05.899940 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-02 00:55:05.899945 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-1)  2025-09-02 00:55:05.899951 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-02 00:55:05.899959 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.899965 | orchestrator | 2025-09-02 00:55:05.899970 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-02 00:55:05.899976 | orchestrator | Tuesday 02 September 2025 00:49:10 +0000 (0:00:00.605) 0:05:45.823 ***** 2025-09-02 00:55:05.899981 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.899986 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.899992 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.899997 | orchestrator | 2025-09-02 00:55:05.900019 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-09-02 00:55:05.900025 | orchestrator | 2025-09-02 00:55:05.900031 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-02 00:55:05.900036 | orchestrator | Tuesday 02 September 2025 00:49:11 +0000 (0:00:00.875) 0:05:46.699 ***** 2025-09-02 00:55:05.900042 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.900048 | orchestrator | 2025-09-02 00:55:05.900053 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-02 00:55:05.900058 | orchestrator | Tuesday 02 September 2025 00:49:11 +0000 (0:00:00.539) 0:05:47.238 ***** 2025-09-02 00:55:05.900064 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.900069 | orchestrator | 2025-09-02 00:55:05.900075 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-02 00:55:05.900080 | orchestrator | Tuesday 02 September 2025 00:49:12 +0000 (0:00:00.524) 0:05:47.763 ***** 2025-09-02 00:55:05.900086 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.900091 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.900097 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.900102 | orchestrator | 2025-09-02 00:55:05.900107 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-02 00:55:05.900113 | orchestrator | Tuesday 02 September 2025 00:49:13 +0000 (0:00:01.020) 0:05:48.783 ***** 2025-09-02 00:55:05.900118 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.900124 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.900129 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.900135 | orchestrator | 2025-09-02 00:55:05.900140 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-02 00:55:05.900146 | orchestrator | Tuesday 02 September 2025 00:49:13 +0000 (0:00:00.354) 0:05:49.137 ***** 2025-09-02 00:55:05.900151 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.900156 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.900162 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.900167 | orchestrator | 2025-09-02 00:55:05.900173 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-02 00:55:05.900178 | orchestrator | Tuesday 02 September 2025 00:49:14 +0000 (0:00:00.279) 0:05:49.417 ***** 2025-09-02 00:55:05.900184 | orchestrator | 
skipping: [testbed-node-0] 2025-09-02 00:55:05.900189 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.900194 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.900200 | orchestrator | 2025-09-02 00:55:05.900205 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-02 00:55:05.900211 | orchestrator | Tuesday 02 September 2025 00:49:14 +0000 (0:00:00.333) 0:05:49.750 ***** 2025-09-02 00:55:05.900216 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.900221 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.900227 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.900232 | orchestrator | 2025-09-02 00:55:05.900238 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-02 00:55:05.900243 | orchestrator | Tuesday 02 September 2025 00:49:15 +0000 (0:00:01.005) 0:05:50.755 ***** 2025-09-02 00:55:05.900249 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.900254 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.900263 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.900268 | orchestrator | 2025-09-02 00:55:05.900274 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-02 00:55:05.900279 | orchestrator | Tuesday 02 September 2025 00:49:15 +0000 (0:00:00.360) 0:05:51.116 ***** 2025-09-02 00:55:05.900285 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.900290 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.900296 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.900301 | orchestrator | 2025-09-02 00:55:05.900306 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-02 00:55:05.900312 | orchestrator | Tuesday 02 September 2025 00:49:16 +0000 (0:00:00.317) 0:05:51.433 ***** 2025-09-02 00:55:05.900317 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.900323 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.900328 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.900334 | orchestrator | 2025-09-02 00:55:05.900339 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-02 00:55:05.900345 | orchestrator | Tuesday 02 September 2025 00:49:16 +0000 (0:00:00.787) 0:05:52.221 ***** 2025-09-02 00:55:05.900350 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.900355 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.900361 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.900366 | orchestrator | 2025-09-02 00:55:05.900372 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-02 00:55:05.900377 | orchestrator | Tuesday 02 September 2025 00:49:17 +0000 (0:00:01.012) 0:05:53.233 ***** 2025-09-02 00:55:05.900383 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.900388 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.900393 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.900399 | orchestrator | 2025-09-02 00:55:05.900404 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-02 00:55:05.900410 | orchestrator | Tuesday 02 September 2025 00:49:18 +0000 (0:00:00.309) 0:05:53.543 ***** 2025-09-02 00:55:05.900415 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.900446 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.900452 | 
orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.900458 | orchestrator | 2025-09-02 00:55:05.900463 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-02 00:55:05.900469 | orchestrator | Tuesday 02 September 2025 00:49:18 +0000 (0:00:00.317) 0:05:53.860 ***** 2025-09-02 00:55:05.900474 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.900480 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.900485 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.900490 | orchestrator | 2025-09-02 00:55:05.900496 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-02 00:55:05.900501 | orchestrator | Tuesday 02 September 2025 00:49:18 +0000 (0:00:00.302) 0:05:54.163 ***** 2025-09-02 00:55:05.900507 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.900512 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.900536 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.900542 | orchestrator | 2025-09-02 00:55:05.900548 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-02 00:55:05.900553 | orchestrator | Tuesday 02 September 2025 00:49:19 +0000 (0:00:00.542) 0:05:54.705 ***** 2025-09-02 00:55:05.900559 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.900564 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.900569 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.900575 | orchestrator | 2025-09-02 00:55:05.900580 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-02 00:55:05.900586 | orchestrator | Tuesday 02 September 2025 00:49:19 +0000 (0:00:00.301) 0:05:55.006 ***** 2025-09-02 00:55:05.900591 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.900597 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.900602 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.900607 | orchestrator | 2025-09-02 00:55:05.900616 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-02 00:55:05.900622 | orchestrator | Tuesday 02 September 2025 00:49:20 +0000 (0:00:00.320) 0:05:55.327 ***** 2025-09-02 00:55:05.900627 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.900632 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.900638 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.900643 | orchestrator | 2025-09-02 00:55:05.900649 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-02 00:55:05.900654 | orchestrator | Tuesday 02 September 2025 00:49:20 +0000 (0:00:00.311) 0:05:55.638 ***** 2025-09-02 00:55:05.900660 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.900665 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.900670 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.900676 | orchestrator | 2025-09-02 00:55:05.900681 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-02 00:55:05.900687 | orchestrator | Tuesday 02 September 2025 00:49:20 +0000 (0:00:00.386) 0:05:56.024 ***** 2025-09-02 00:55:05.900692 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.900697 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.900703 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.900708 | orchestrator | 2025-09-02 
00:55:05.900714 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-02 00:55:05.900719 | orchestrator | Tuesday 02 September 2025 00:49:21 +0000 (0:00:00.666) 0:05:56.691 ***** 2025-09-02 00:55:05.900725 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.900730 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.900735 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.900741 | orchestrator | 2025-09-02 00:55:05.900746 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-09-02 00:55:05.900752 | orchestrator | Tuesday 02 September 2025 00:49:21 +0000 (0:00:00.564) 0:05:57.255 ***** 2025-09-02 00:55:05.900757 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-02 00:55:05.900763 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-02 00:55:05.900768 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-02 00:55:05.900774 | orchestrator | 2025-09-02 00:55:05.900782 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-09-02 00:55:05.900788 | orchestrator | Tuesday 02 September 2025 00:49:22 +0000 (0:00:00.857) 0:05:58.113 ***** 2025-09-02 00:55:05.900793 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.900799 | orchestrator | 2025-09-02 00:55:05.900804 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-09-02 00:55:05.900809 | orchestrator | Tuesday 02 September 2025 00:49:23 +0000 (0:00:00.769) 0:05:58.883 ***** 2025-09-02 00:55:05.900815 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.900820 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.900826 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.900831 | orchestrator | 2025-09-02 00:55:05.900837 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-09-02 00:55:05.900842 | orchestrator | Tuesday 02 September 2025 00:49:24 +0000 (0:00:00.683) 0:05:59.567 ***** 2025-09-02 00:55:05.900847 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.900853 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.900858 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.900864 | orchestrator | 2025-09-02 00:55:05.900869 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-09-02 00:55:05.900875 | orchestrator | Tuesday 02 September 2025 00:49:24 +0000 (0:00:00.333) 0:05:59.901 ***** 2025-09-02 00:55:05.900880 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-02 00:55:05.900885 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-02 00:55:05.900891 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-02 00:55:05.900896 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-09-02 00:55:05.900905 | orchestrator | 2025-09-02 00:55:05.900911 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-09-02 00:55:05.900916 | orchestrator | Tuesday 02 September 2025 00:49:35 +0000 (0:00:10.738) 0:06:10.639 ***** 2025-09-02 00:55:05.900922 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.900927 | orchestrator | ok: [testbed-node-0] 2025-09-02 
00:55:05.900932 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.900938 | orchestrator | 2025-09-02 00:55:05.900943 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-09-02 00:55:05.900949 | orchestrator | Tuesday 02 September 2025 00:49:36 +0000 (0:00:00.683) 0:06:11.322 ***** 2025-09-02 00:55:05.900954 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-02 00:55:05.900960 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-02 00:55:05.900965 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-02 00:55:05.900971 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-02 00:55:05.900976 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:55:05.900982 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:55:05.900987 | orchestrator | 2025-09-02 00:55:05.901009 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-09-02 00:55:05.901015 | orchestrator | Tuesday 02 September 2025 00:49:38 +0000 (0:00:02.351) 0:06:13.674 ***** 2025-09-02 00:55:05.901021 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-02 00:55:05.901026 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-02 00:55:05.901032 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-02 00:55:05.901037 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-02 00:55:05.901043 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-02 00:55:05.901048 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-02 00:55:05.901054 | orchestrator | 2025-09-02 00:55:05.901059 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-09-02 00:55:05.901064 | orchestrator | Tuesday 02 September 2025 00:49:39 +0000 (0:00:01.240) 0:06:14.914 ***** 2025-09-02 00:55:05.901070 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.901075 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.901081 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.901086 | orchestrator | 2025-09-02 00:55:05.901092 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-09-02 00:55:05.901097 | orchestrator | Tuesday 02 September 2025 00:49:40 +0000 (0:00:00.723) 0:06:15.638 ***** 2025-09-02 00:55:05.901102 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.901108 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.901113 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.901119 | orchestrator | 2025-09-02 00:55:05.901124 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-09-02 00:55:05.901129 | orchestrator | Tuesday 02 September 2025 00:49:40 +0000 (0:00:00.340) 0:06:15.979 ***** 2025-09-02 00:55:05.901135 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.901140 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.901146 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.901151 | orchestrator | 2025-09-02 00:55:05.901157 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-09-02 00:55:05.901162 | orchestrator | Tuesday 02 September 2025 00:49:41 +0000 (0:00:00.634) 0:06:16.613 ***** 2025-09-02 00:55:05.901167 | orchestrator | included: 
/ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.901173 | orchestrator | 2025-09-02 00:55:05.901178 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-09-02 00:55:05.901184 | orchestrator | Tuesday 02 September 2025 00:49:41 +0000 (0:00:00.543) 0:06:17.156 ***** 2025-09-02 00:55:05.901189 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.901200 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.901206 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.901211 | orchestrator | 2025-09-02 00:55:05.901217 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-09-02 00:55:05.901222 | orchestrator | Tuesday 02 September 2025 00:49:42 +0000 (0:00:00.329) 0:06:17.486 ***** 2025-09-02 00:55:05.901228 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.901233 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.901238 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.901244 | orchestrator | 2025-09-02 00:55:05.901252 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-09-02 00:55:05.901257 | orchestrator | Tuesday 02 September 2025 00:49:42 +0000 (0:00:00.554) 0:06:18.041 ***** 2025-09-02 00:55:05.901263 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.901268 | orchestrator | 2025-09-02 00:55:05.901274 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-09-02 00:55:05.901279 | orchestrator | Tuesday 02 September 2025 00:49:43 +0000 (0:00:00.526) 0:06:18.567 ***** 2025-09-02 00:55:05.901284 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.901290 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.901295 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.901301 | orchestrator | 2025-09-02 00:55:05.901306 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-09-02 00:55:05.901311 | orchestrator | Tuesday 02 September 2025 00:49:44 +0000 (0:00:01.240) 0:06:19.808 ***** 2025-09-02 00:55:05.901317 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.901322 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.901328 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.901333 | orchestrator | 2025-09-02 00:55:05.901338 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-09-02 00:55:05.901344 | orchestrator | Tuesday 02 September 2025 00:49:45 +0000 (0:00:01.363) 0:06:21.172 ***** 2025-09-02 00:55:05.901349 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.901355 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.901360 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.901365 | orchestrator | 2025-09-02 00:55:05.901371 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-09-02 00:55:05.901376 | orchestrator | Tuesday 02 September 2025 00:49:47 +0000 (0:00:01.755) 0:06:22.928 ***** 2025-09-02 00:55:05.901382 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.901387 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.901392 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.901398 | orchestrator | 
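For reference, the ceph-mgr tasks above (keyring creation on a monitor, systemd unit generation, enabling ceph-mgr.target and starting the manager) correspond roughly to the manual steps below. This is a sketch only: the keyring path and the ceph-mgr@ instance unit are generated by ceph-ansible for the containerized daemons, and testbed-node-0 is used here purely as an example hostname.

    # Create the mgr keyring on a monitor node with the standard mgr capabilities
    ceph auth get-or-create mgr.testbed-node-0 \
        mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
        -o /var/lib/ceph/mgr/ceph-testbed-node-0/keyring

    # Enable and start the manager via its systemd target and instance unit
    systemctl enable --now ceph-mgr.target
    systemctl enable --now ceph-mgr@testbed-node-0.service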
2025-09-02 00:55:05.901403 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-09-02 00:55:05.901409 | orchestrator | Tuesday 02 September 2025 00:49:49 +0000 (0:00:02.079) 0:06:25.007 ***** 2025-09-02 00:55:05.901414 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.901420 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.901436 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-09-02 00:55:05.901441 | orchestrator | 2025-09-02 00:55:05.901447 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-09-02 00:55:05.901452 | orchestrator | Tuesday 02 September 2025 00:49:50 +0000 (0:00:00.468) 0:06:25.476 ***** 2025-09-02 00:55:05.901458 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-09-02 00:55:05.901480 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-09-02 00:55:05.901487 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-09-02 00:55:05.901493 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-09-02 00:55:05.901498 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-02 00:55:05.901507 | orchestrator | 2025-09-02 00:55:05.901513 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-09-02 00:55:05.901518 | orchestrator | Tuesday 02 September 2025 00:50:14 +0000 (0:00:24.718) 0:06:50.194 ***** 2025-09-02 00:55:05.901524 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-02 00:55:05.901529 | orchestrator | 2025-09-02 00:55:05.901535 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-09-02 00:55:05.901540 | orchestrator | Tuesday 02 September 2025 00:50:16 +0000 (0:00:01.767) 0:06:51.962 ***** 2025-09-02 00:55:05.901546 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.901551 | orchestrator | 2025-09-02 00:55:05.901556 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-09-02 00:55:05.901562 | orchestrator | Tuesday 02 September 2025 00:50:16 +0000 (0:00:00.306) 0:06:52.269 ***** 2025-09-02 00:55:05.901567 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.901573 | orchestrator | 2025-09-02 00:55:05.901578 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-09-02 00:55:05.901583 | orchestrator | Tuesday 02 September 2025 00:50:17 +0000 (0:00:00.216) 0:06:52.485 ***** 2025-09-02 00:55:05.901589 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-09-02 00:55:05.901594 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-09-02 00:55:05.901600 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-09-02 00:55:05.901605 | orchestrator | 2025-09-02 00:55:05.901611 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-09-02 00:55:05.901616 | orchestrator | Tuesday 02 September 2025 00:50:23 +0000 (0:00:06.521) 0:06:59.006 ***** 2025-09-02 00:55:05.901621 | orchestrator | skipping: 
[testbed-node-2] => (item=balancer)  2025-09-02 00:55:05.901627 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-09-02 00:55:05.901632 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-09-02 00:55:05.901637 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-09-02 00:55:05.901643 | orchestrator | 2025-09-02 00:55:05.901648 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-02 00:55:05.901654 | orchestrator | Tuesday 02 September 2025 00:50:28 +0000 (0:00:04.935) 0:07:03.941 ***** 2025-09-02 00:55:05.901659 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.901665 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.901673 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.901678 | orchestrator | 2025-09-02 00:55:05.901684 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-02 00:55:05.901689 | orchestrator | Tuesday 02 September 2025 00:50:29 +0000 (0:00:01.021) 0:07:04.963 ***** 2025-09-02 00:55:05.901695 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.901700 | orchestrator | 2025-09-02 00:55:05.901706 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-02 00:55:05.901711 | orchestrator | Tuesday 02 September 2025 00:50:30 +0000 (0:00:00.536) 0:07:05.499 ***** 2025-09-02 00:55:05.901716 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.901722 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.901727 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.901733 | orchestrator | 2025-09-02 00:55:05.901738 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-02 00:55:05.901743 | orchestrator | Tuesday 02 September 2025 00:50:30 +0000 (0:00:00.370) 0:07:05.870 ***** 2025-09-02 00:55:05.901749 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.901754 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.901760 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.901765 | orchestrator | 2025-09-02 00:55:05.901770 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-02 00:55:05.901779 | orchestrator | Tuesday 02 September 2025 00:50:31 +0000 (0:00:01.398) 0:07:07.269 ***** 2025-09-02 00:55:05.901785 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-02 00:55:05.901790 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-02 00:55:05.901795 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-02 00:55:05.901801 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.901806 | orchestrator | 2025-09-02 00:55:05.901811 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-02 00:55:05.901817 | orchestrator | Tuesday 02 September 2025 00:50:32 +0000 (0:00:00.669) 0:07:07.939 ***** 2025-09-02 00:55:05.901822 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.901828 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.901833 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.901838 | orchestrator | 2025-09-02 00:55:05.901844 | orchestrator | PLAY [Apply role ceph-osd] 
***************************************************** 2025-09-02 00:55:05.901849 | orchestrator | 2025-09-02 00:55:05.901855 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-02 00:55:05.901860 | orchestrator | Tuesday 02 September 2025 00:50:33 +0000 (0:00:00.607) 0:07:08.547 ***** 2025-09-02 00:55:05.901866 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.901871 | orchestrator | 2025-09-02 00:55:05.901876 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-02 00:55:05.901899 | orchestrator | Tuesday 02 September 2025 00:50:33 +0000 (0:00:00.741) 0:07:09.289 ***** 2025-09-02 00:55:05.901906 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.901911 | orchestrator | 2025-09-02 00:55:05.901917 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-02 00:55:05.901922 | orchestrator | Tuesday 02 September 2025 00:50:34 +0000 (0:00:00.543) 0:07:09.833 ***** 2025-09-02 00:55:05.901928 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.901933 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.901939 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.901944 | orchestrator | 2025-09-02 00:55:05.901949 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-02 00:55:05.901955 | orchestrator | Tuesday 02 September 2025 00:50:34 +0000 (0:00:00.285) 0:07:10.118 ***** 2025-09-02 00:55:05.901960 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.901966 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.901971 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.901976 | orchestrator | 2025-09-02 00:55:05.901982 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-02 00:55:05.901987 | orchestrator | Tuesday 02 September 2025 00:50:35 +0000 (0:00:01.081) 0:07:11.200 ***** 2025-09-02 00:55:05.901992 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.901998 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.902003 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.902009 | orchestrator | 2025-09-02 00:55:05.902035 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-02 00:55:05.902043 | orchestrator | Tuesday 02 September 2025 00:50:36 +0000 (0:00:00.672) 0:07:11.872 ***** 2025-09-02 00:55:05.902048 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.902054 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.902059 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.902064 | orchestrator | 2025-09-02 00:55:05.902070 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-02 00:55:05.902075 | orchestrator | Tuesday 02 September 2025 00:50:37 +0000 (0:00:00.694) 0:07:12.567 ***** 2025-09-02 00:55:05.902081 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.902086 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.902092 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.902148 | orchestrator | 2025-09-02 00:55:05.902156 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2025-09-02 00:55:05.902161 | orchestrator | Tuesday 02 September 2025 00:50:37 +0000 (0:00:00.300) 0:07:12.868 ***** 2025-09-02 00:55:05.902167 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.902172 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.902178 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.902183 | orchestrator | 2025-09-02 00:55:05.902189 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-02 00:55:05.902194 | orchestrator | Tuesday 02 September 2025 00:50:38 +0000 (0:00:00.580) 0:07:13.449 ***** 2025-09-02 00:55:05.902200 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.902205 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.902210 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.902216 | orchestrator | 2025-09-02 00:55:05.902224 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-02 00:55:05.902230 | orchestrator | Tuesday 02 September 2025 00:50:38 +0000 (0:00:00.320) 0:07:13.769 ***** 2025-09-02 00:55:05.902235 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.902241 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.902246 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.902251 | orchestrator | 2025-09-02 00:55:05.902257 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-02 00:55:05.902262 | orchestrator | Tuesday 02 September 2025 00:50:39 +0000 (0:00:00.697) 0:07:14.466 ***** 2025-09-02 00:55:05.902268 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.902273 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.902279 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.902284 | orchestrator | 2025-09-02 00:55:05.902289 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-02 00:55:05.902295 | orchestrator | Tuesday 02 September 2025 00:50:39 +0000 (0:00:00.726) 0:07:15.193 ***** 2025-09-02 00:55:05.902300 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.902306 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.902311 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.902317 | orchestrator | 2025-09-02 00:55:05.902322 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-02 00:55:05.902328 | orchestrator | Tuesday 02 September 2025 00:50:40 +0000 (0:00:00.584) 0:07:15.778 ***** 2025-09-02 00:55:05.902333 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.902338 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.902344 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.902349 | orchestrator | 2025-09-02 00:55:05.902355 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-02 00:55:05.902360 | orchestrator | Tuesday 02 September 2025 00:50:40 +0000 (0:00:00.314) 0:07:16.092 ***** 2025-09-02 00:55:05.902366 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.902371 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.902376 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.902382 | orchestrator | 2025-09-02 00:55:05.902387 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-02 00:55:05.902393 | orchestrator | Tuesday 02 September 2025 00:50:41 +0000 
(0:00:00.328) 0:07:16.420 ***** 2025-09-02 00:55:05.902398 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.902404 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.902409 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.902415 | orchestrator | 2025-09-02 00:55:05.902420 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-02 00:55:05.902471 | orchestrator | Tuesday 02 September 2025 00:50:41 +0000 (0:00:00.324) 0:07:16.745 ***** 2025-09-02 00:55:05.902477 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.902482 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.902488 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.902493 | orchestrator | 2025-09-02 00:55:05.902499 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-02 00:55:05.902509 | orchestrator | Tuesday 02 September 2025 00:50:42 +0000 (0:00:00.632) 0:07:17.377 ***** 2025-09-02 00:55:05.902518 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.902524 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.902529 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.902535 | orchestrator | 2025-09-02 00:55:05.902540 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-02 00:55:05.902546 | orchestrator | Tuesday 02 September 2025 00:50:42 +0000 (0:00:00.296) 0:07:17.674 ***** 2025-09-02 00:55:05.902551 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.902557 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.902562 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.902567 | orchestrator | 2025-09-02 00:55:05.902573 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-02 00:55:05.902578 | orchestrator | Tuesday 02 September 2025 00:50:42 +0000 (0:00:00.322) 0:07:17.996 ***** 2025-09-02 00:55:05.902584 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.902589 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.902595 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.902600 | orchestrator | 2025-09-02 00:55:05.902605 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-02 00:55:05.902611 | orchestrator | Tuesday 02 September 2025 00:50:43 +0000 (0:00:00.318) 0:07:18.315 ***** 2025-09-02 00:55:05.902616 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.902622 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.902627 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.902632 | orchestrator | 2025-09-02 00:55:05.902638 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-02 00:55:05.902643 | orchestrator | Tuesday 02 September 2025 00:50:43 +0000 (0:00:00.595) 0:07:18.911 ***** 2025-09-02 00:55:05.902647 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.902652 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.902657 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.902662 | orchestrator | 2025-09-02 00:55:05.902667 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-09-02 00:55:05.902671 | orchestrator | Tuesday 02 September 2025 00:50:44 +0000 (0:00:00.518) 0:07:19.430 ***** 2025-09-02 00:55:05.902676 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.902681 | 
orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.902686 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.902691 | orchestrator | 2025-09-02 00:55:05.902695 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-09-02 00:55:05.902700 | orchestrator | Tuesday 02 September 2025 00:50:44 +0000 (0:00:00.329) 0:07:19.759 ***** 2025-09-02 00:55:05.902705 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-02 00:55:05.902710 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-02 00:55:05.902715 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-02 00:55:05.902720 | orchestrator | 2025-09-02 00:55:05.902725 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-09-02 00:55:05.902729 | orchestrator | Tuesday 02 September 2025 00:50:45 +0000 (0:00:00.921) 0:07:20.681 ***** 2025-09-02 00:55:05.902737 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.902742 | orchestrator | 2025-09-02 00:55:05.902747 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-09-02 00:55:05.902752 | orchestrator | Tuesday 02 September 2025 00:50:46 +0000 (0:00:00.796) 0:07:21.477 ***** 2025-09-02 00:55:05.902757 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.902762 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.902766 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.902771 | orchestrator | 2025-09-02 00:55:05.902776 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-09-02 00:55:05.902784 | orchestrator | Tuesday 02 September 2025 00:50:46 +0000 (0:00:00.308) 0:07:21.785 ***** 2025-09-02 00:55:05.902789 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.902794 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.902799 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.902803 | orchestrator | 2025-09-02 00:55:05.902808 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-09-02 00:55:05.902813 | orchestrator | Tuesday 02 September 2025 00:50:46 +0000 (0:00:00.332) 0:07:22.117 ***** 2025-09-02 00:55:05.902818 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.902823 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.902828 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.902832 | orchestrator | 2025-09-02 00:55:05.902837 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-09-02 00:55:05.902842 | orchestrator | Tuesday 02 September 2025 00:50:47 +0000 (0:00:00.844) 0:07:22.962 ***** 2025-09-02 00:55:05.902847 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.902852 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.902856 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.902861 | orchestrator | 2025-09-02 00:55:05.902866 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-09-02 00:55:05.902871 | orchestrator | Tuesday 02 September 2025 00:50:48 +0000 (0:00:00.393) 0:07:23.355 ***** 2025-09-02 00:55:05.902875 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': 
'1048576', 'enable': True}) 2025-09-02 00:55:05.902880 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-02 00:55:05.902885 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-02 00:55:05.902890 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-02 00:55:05.902895 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-02 00:55:05.902900 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-02 00:55:05.902904 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-02 00:55:05.902913 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-02 00:55:05.902918 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-02 00:55:05.902922 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-02 00:55:05.902927 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-02 00:55:05.902932 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-02 00:55:05.902937 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-02 00:55:05.902942 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-02 00:55:05.902947 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-02 00:55:05.902952 | orchestrator | 2025-09-02 00:55:05.902956 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-09-02 00:55:05.902961 | orchestrator | Tuesday 02 September 2025 00:50:50 +0000 (0:00:02.303) 0:07:25.659 ***** 2025-09-02 00:55:05.902966 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.902971 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.902976 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.902981 | orchestrator | 2025-09-02 00:55:05.902986 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-09-02 00:55:05.902991 | orchestrator | Tuesday 02 September 2025 00:50:50 +0000 (0:00:00.300) 0:07:25.960 ***** 2025-09-02 00:55:05.902996 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.903003 | orchestrator | 2025-09-02 00:55:05.903008 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-09-02 00:55:05.903013 | orchestrator | Tuesday 02 September 2025 00:50:51 +0000 (0:00:00.763) 0:07:26.723 ***** 2025-09-02 00:55:05.903018 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-02 00:55:05.903023 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-02 00:55:05.903028 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-02 00:55:05.903033 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-09-02 00:55:05.903038 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-09-02 00:55:05.903043 | 
orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-09-02 00:55:05.903047 | orchestrator | 2025-09-02 00:55:05.903052 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-09-02 00:55:05.903057 | orchestrator | Tuesday 02 September 2025 00:50:52 +0000 (0:00:01.046) 0:07:27.769 ***** 2025-09-02 00:55:05.903062 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:55:05.903069 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-02 00:55:05.903074 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-02 00:55:05.903079 | orchestrator | 2025-09-02 00:55:05.903084 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-09-02 00:55:05.903089 | orchestrator | Tuesday 02 September 2025 00:50:54 +0000 (0:00:02.055) 0:07:29.825 ***** 2025-09-02 00:55:05.903094 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-02 00:55:05.903099 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-02 00:55:05.903103 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.903108 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-02 00:55:05.903113 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-02 00:55:05.903118 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.903123 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-02 00:55:05.903128 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-02 00:55:05.903133 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.903137 | orchestrator | 2025-09-02 00:55:05.903142 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-09-02 00:55:05.903147 | orchestrator | Tuesday 02 September 2025 00:50:55 +0000 (0:00:01.151) 0:07:30.976 ***** 2025-09-02 00:55:05.903152 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-02 00:55:05.903157 | orchestrator | 2025-09-02 00:55:05.903162 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-09-02 00:55:05.903167 | orchestrator | Tuesday 02 September 2025 00:50:58 +0000 (0:00:02.559) 0:07:33.536 ***** 2025-09-02 00:55:05.903172 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.903177 | orchestrator | 2025-09-02 00:55:05.903182 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-09-02 00:55:05.903186 | orchestrator | Tuesday 02 September 2025 00:50:58 +0000 (0:00:00.529) 0:07:34.066 ***** 2025-09-02 00:55:05.903191 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c', 'data_vg': 'ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c'}) 2025-09-02 00:55:05.903197 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7ad19e49-f824-57b0-a164-7b3912efd317', 'data_vg': 'ceph-7ad19e49-f824-57b0-a164-7b3912efd317'}) 2025-09-02 00:55:05.903202 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-de858a7c-8c7c-5154-a7df-793b28d7d942', 'data_vg': 'ceph-de858a7c-8c7c-5154-a7df-793b28d7d942'}) 2025-09-02 00:55:05.903207 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-688b3bb6-a638-5f84-8470-ce7969c766cd', 'data_vg': 
'ceph-688b3bb6-a638-5f84-8470-ce7969c766cd'}) 2025-09-02 00:55:05.903217 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-14a05dcf-7776-5f2b-8543-65494bada47a', 'data_vg': 'ceph-14a05dcf-7776-5f2b-8543-65494bada47a'}) 2025-09-02 00:55:05.903222 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4843a7b7-fb51-5101-86f0-3e9039878e37', 'data_vg': 'ceph-4843a7b7-fb51-5101-86f0-3e9039878e37'}) 2025-09-02 00:55:05.903227 | orchestrator | 2025-09-02 00:55:05.903232 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-09-02 00:55:05.903237 | orchestrator | Tuesday 02 September 2025 00:51:40 +0000 (0:00:42.100) 0:08:16.166 ***** 2025-09-02 00:55:05.903242 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.903247 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.903252 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.903257 | orchestrator | 2025-09-02 00:55:05.903262 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-09-02 00:55:05.903266 | orchestrator | Tuesday 02 September 2025 00:51:41 +0000 (0:00:00.547) 0:08:16.714 ***** 2025-09-02 00:55:05.903271 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.903276 | orchestrator | 2025-09-02 00:55:05.903281 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-09-02 00:55:05.903286 | orchestrator | Tuesday 02 September 2025 00:51:41 +0000 (0:00:00.544) 0:08:17.258 ***** 2025-09-02 00:55:05.903291 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.903296 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.903301 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.903306 | orchestrator | 2025-09-02 00:55:05.903311 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-09-02 00:55:05.903315 | orchestrator | Tuesday 02 September 2025 00:51:42 +0000 (0:00:00.669) 0:08:17.927 ***** 2025-09-02 00:55:05.903320 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.903325 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.903330 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.903335 | orchestrator | 2025-09-02 00:55:05.903340 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-09-02 00:55:05.903345 | orchestrator | Tuesday 02 September 2025 00:51:45 +0000 (0:00:02.878) 0:08:20.806 ***** 2025-09-02 00:55:05.903350 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.903354 | orchestrator | 2025-09-02 00:55:05.903359 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-09-02 00:55:05.903364 | orchestrator | Tuesday 02 September 2025 00:51:46 +0000 (0:00:00.528) 0:08:21.335 ***** 2025-09-02 00:55:05.903369 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.903374 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.903379 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.903384 | orchestrator | 2025-09-02 00:55:05.903389 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-09-02 00:55:05.903396 | orchestrator | Tuesday 02 September 2025 00:51:47 +0000 (0:00:01.192) 0:08:22.527 ***** 
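The OSD provisioning steps above (kernel tuning, the temporary noup flag, and ceph-volume runs against pre-created LVM volumes) can be approximated manually as follows. A sketch under the assumption of a BlueStore deployment with one logical volume per OSD; the sysctl values and the VG/LV names are taken from the log items above.

    # Apply the same kernel tuning that the playbook writes via sysctl
    sysctl -w fs.aio-max-nr=1048576 fs.file-max=26234859 \
        vm.zone_reclaim_mode=0 vm.swappiness=10 vm.min_free_kbytes=67584

    # Prevent the new OSDs from being marked up while they are being created
    ceph osd set noup

    # Create a BlueStore OSD from an existing volume group / logical volume
    ceph-volume lvm create --bluestore \
        --data ceph-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c/osd-block-13b5fa21-9dd3-5f23-9982-99f7e2a8b07c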
2025-09-02 00:55:05.903401 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.903406 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.903411 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.903416 | orchestrator | 2025-09-02 00:55:05.903428 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-09-02 00:55:05.903433 | orchestrator | Tuesday 02 September 2025 00:51:48 +0000 (0:00:01.420) 0:08:23.947 ***** 2025-09-02 00:55:05.903438 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.903443 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.903447 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.903452 | orchestrator | 2025-09-02 00:55:05.903457 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-09-02 00:55:05.903462 | orchestrator | Tuesday 02 September 2025 00:51:50 +0000 (0:00:01.695) 0:08:25.643 ***** 2025-09-02 00:55:05.903470 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.903474 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.903479 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.903484 | orchestrator | 2025-09-02 00:55:05.903489 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-09-02 00:55:05.903494 | orchestrator | Tuesday 02 September 2025 00:51:50 +0000 (0:00:00.342) 0:08:25.985 ***** 2025-09-02 00:55:05.903499 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.903503 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.903508 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.903513 | orchestrator | 2025-09-02 00:55:05.903518 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-09-02 00:55:05.903523 | orchestrator | Tuesday 02 September 2025 00:51:50 +0000 (0:00:00.320) 0:08:26.305 ***** 2025-09-02 00:55:05.903528 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-09-02 00:55:05.903532 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-09-02 00:55:05.903537 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-09-02 00:55:05.903542 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-09-02 00:55:05.903547 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-02 00:55:05.903551 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-09-02 00:55:05.903556 | orchestrator | 2025-09-02 00:55:05.903561 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-09-02 00:55:05.903566 | orchestrator | Tuesday 02 September 2025 00:51:52 +0000 (0:00:01.287) 0:08:27.592 ***** 2025-09-02 00:55:05.903571 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-09-02 00:55:05.903576 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-09-02 00:55:05.903580 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-09-02 00:55:05.903585 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-02 00:55:05.903590 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-09-02 00:55:05.903595 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-09-02 00:55:05.903600 | orchestrator | 2025-09-02 00:55:05.903605 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-09-02 00:55:05.903609 | orchestrator | Tuesday 02 September 2025 00:51:54 +0000 (0:00:02.312) 0:08:29.905 ***** 2025-09-02 00:55:05.903617 | orchestrator | changed: 
[testbed-node-3] => (item=3) 2025-09-02 00:55:05.903622 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-09-02 00:55:05.903626 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-09-02 00:55:05.903631 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-02 00:55:05.903636 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-09-02 00:55:05.903641 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-09-02 00:55:05.903646 | orchestrator | 2025-09-02 00:55:05.903651 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-09-02 00:55:05.903655 | orchestrator | Tuesday 02 September 2025 00:51:58 +0000 (0:00:03.404) 0:08:33.309 ***** 2025-09-02 00:55:05.903660 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.903665 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.903670 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-02 00:55:05.903675 | orchestrator | 2025-09-02 00:55:05.903680 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-09-02 00:55:05.903684 | orchestrator | Tuesday 02 September 2025 00:52:00 +0000 (0:00:02.385) 0:08:35.695 ***** 2025-09-02 00:55:05.903689 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.903694 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.903699 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-09-02 00:55:05.903704 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-02 00:55:05.903709 | orchestrator | 2025-09-02 00:55:05.903713 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-09-02 00:55:05.903718 | orchestrator | Tuesday 02 September 2025 00:52:13 +0000 (0:00:13.055) 0:08:48.750 ***** 2025-09-02 00:55:05.903728 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.903732 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.903737 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.903742 | orchestrator | 2025-09-02 00:55:05.903747 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-02 00:55:05.903752 | orchestrator | Tuesday 02 September 2025 00:52:14 +0000 (0:00:00.897) 0:08:49.648 ***** 2025-09-02 00:55:05.903757 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.903761 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.903766 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.903771 | orchestrator | 2025-09-02 00:55:05.903776 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-02 00:55:05.903781 | orchestrator | Tuesday 02 September 2025 00:52:14 +0000 (0:00:00.584) 0:08:50.233 ***** 2025-09-02 00:55:05.903786 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.903790 | orchestrator | 2025-09-02 00:55:05.903795 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-02 00:55:05.903800 | orchestrator | Tuesday 02 September 2025 00:52:15 +0000 (0:00:00.596) 0:08:50.829 ***** 2025-09-02 00:55:05.903805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-02 00:55:05.903812 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-09-02 00:55:05.903817 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-02 00:55:05.903822 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.903827 | orchestrator | 2025-09-02 00:55:05.903832 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-02 00:55:05.903837 | orchestrator | Tuesday 02 September 2025 00:52:15 +0000 (0:00:00.395) 0:08:51.224 ***** 2025-09-02 00:55:05.903841 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.903846 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.903851 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.903856 | orchestrator | 2025-09-02 00:55:05.903861 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-02 00:55:05.903866 | orchestrator | Tuesday 02 September 2025 00:52:16 +0000 (0:00:00.302) 0:08:51.527 ***** 2025-09-02 00:55:05.903871 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.903875 | orchestrator | 2025-09-02 00:55:05.903880 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-02 00:55:05.903885 | orchestrator | Tuesday 02 September 2025 00:52:16 +0000 (0:00:00.234) 0:08:51.762 ***** 2025-09-02 00:55:05.903890 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.903895 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.903899 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.903904 | orchestrator | 2025-09-02 00:55:05.903909 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-02 00:55:05.903914 | orchestrator | Tuesday 02 September 2025 00:52:17 +0000 (0:00:00.671) 0:08:52.434 ***** 2025-09-02 00:55:05.903919 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.903924 | orchestrator | 2025-09-02 00:55:05.903928 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-02 00:55:05.903933 | orchestrator | Tuesday 02 September 2025 00:52:17 +0000 (0:00:00.253) 0:08:52.687 ***** 2025-09-02 00:55:05.903938 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.903943 | orchestrator | 2025-09-02 00:55:05.903948 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-02 00:55:05.903952 | orchestrator | Tuesday 02 September 2025 00:52:17 +0000 (0:00:00.214) 0:08:52.901 ***** 2025-09-02 00:55:05.903957 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.903962 | orchestrator | 2025-09-02 00:55:05.903967 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-02 00:55:05.903972 | orchestrator | Tuesday 02 September 2025 00:52:17 +0000 (0:00:00.124) 0:08:53.026 ***** 2025-09-02 00:55:05.903979 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.903984 | orchestrator | 2025-09-02 00:55:05.903989 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-02 00:55:05.903994 | orchestrator | Tuesday 02 September 2025 00:52:17 +0000 (0:00:00.219) 0:08:53.246 ***** 2025-09-02 00:55:05.903999 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.904004 | orchestrator | 2025-09-02 00:55:05.904008 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-02 00:55:05.904016 | orchestrator | Tuesday 02 
September 2025 00:52:18 +0000 (0:00:00.228) 0:08:53.474 ***** 2025-09-02 00:55:05.904021 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-02 00:55:05.904026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-02 00:55:05.904030 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-02 00:55:05.904035 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.904040 | orchestrator | 2025-09-02 00:55:05.904045 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-02 00:55:05.904050 | orchestrator | Tuesday 02 September 2025 00:52:18 +0000 (0:00:00.380) 0:08:53.854 ***** 2025-09-02 00:55:05.904055 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.904060 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.904065 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.904069 | orchestrator | 2025-09-02 00:55:05.904074 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-02 00:55:05.904079 | orchestrator | Tuesday 02 September 2025 00:52:18 +0000 (0:00:00.324) 0:08:54.178 ***** 2025-09-02 00:55:05.904084 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.904089 | orchestrator | 2025-09-02 00:55:05.904094 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-02 00:55:05.904099 | orchestrator | Tuesday 02 September 2025 00:52:19 +0000 (0:00:00.780) 0:08:54.959 ***** 2025-09-02 00:55:05.904103 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.904108 | orchestrator | 2025-09-02 00:55:05.904113 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-02 00:55:05.904118 | orchestrator | 2025-09-02 00:55:05.904123 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-02 00:55:05.904128 | orchestrator | Tuesday 02 September 2025 00:52:20 +0000 (0:00:00.694) 0:08:55.654 ***** 2025-09-02 00:55:05.904133 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.904138 | orchestrator | 2025-09-02 00:55:05.904143 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-02 00:55:05.904148 | orchestrator | Tuesday 02 September 2025 00:52:21 +0000 (0:00:01.225) 0:08:56.880 ***** 2025-09-02 00:55:05.904152 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.904157 | orchestrator | 2025-09-02 00:55:05.904162 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-02 00:55:05.904167 | orchestrator | Tuesday 02 September 2025 00:52:22 +0000 (0:00:01.168) 0:08:58.049 ***** 2025-09-02 00:55:05.904172 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.904177 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.904181 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.904186 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.904191 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.904198 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.904204 | orchestrator | 
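Once the OSD services are started, the "Unset noup flag" and "Wait for all osd to be up" tasks above amount to clearing the flag and polling the cluster until every OSD reports in; the mgr module changes from the earlier play can be checked the same way. A rough manual equivalent, assuming the three-node, six-OSD layout deployed here:

    # Allow the newly created OSDs to be marked up again
    ceph osd unset noup

    # Poll until all six OSDs are up and in, then check overall health
    ceph osd stat      # e.g. "6 osds: 6 up, 6 in"
    ceph -s

    # Confirm which mgr modules ended up enabled (dashboard, prometheus, ...)
    ceph mgr module ls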
2025-09-02 00:55:05.904208 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-02 00:55:05.904213 | orchestrator | Tuesday 02 September 2025 00:52:23 +0000 (0:00:01.211) 0:08:59.260 ***** 2025-09-02 00:55:05.904218 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.904226 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.904231 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.904235 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.904240 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.904245 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.904250 | orchestrator | 2025-09-02 00:55:05.904255 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-02 00:55:05.904260 | orchestrator | Tuesday 02 September 2025 00:52:24 +0000 (0:00:00.741) 0:09:00.002 ***** 2025-09-02 00:55:05.904264 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.904269 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.904274 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.904279 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.904284 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.904289 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.904293 | orchestrator | 2025-09-02 00:55:05.904298 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-02 00:55:05.904303 | orchestrator | Tuesday 02 September 2025 00:52:25 +0000 (0:00:00.920) 0:09:00.922 ***** 2025-09-02 00:55:05.904308 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.904313 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.904318 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.904322 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.904327 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.904332 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.904337 | orchestrator | 2025-09-02 00:55:05.904342 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-02 00:55:05.904346 | orchestrator | Tuesday 02 September 2025 00:52:26 +0000 (0:00:00.766) 0:09:01.689 ***** 2025-09-02 00:55:05.904351 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.904356 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.904361 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.904366 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.904371 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.904375 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.904380 | orchestrator | 2025-09-02 00:55:05.904385 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-02 00:55:05.904390 | orchestrator | Tuesday 02 September 2025 00:52:27 +0000 (0:00:01.001) 0:09:02.690 ***** 2025-09-02 00:55:05.904395 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.904400 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.904404 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.904409 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.904414 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.904419 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.904431 | orchestrator | 2025-09-02 00:55:05.904436 | orchestrator | TASK [ceph-handler : 
Check for a nfs container] ******************************** 2025-09-02 00:55:05.904443 | orchestrator | Tuesday 02 September 2025 00:52:28 +0000 (0:00:00.890) 0:09:03.580 ***** 2025-09-02 00:55:05.904449 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.904453 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.904458 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.904463 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.904468 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.904473 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.904477 | orchestrator | 2025-09-02 00:55:05.904482 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-02 00:55:05.904487 | orchestrator | Tuesday 02 September 2025 00:52:28 +0000 (0:00:00.624) 0:09:04.205 ***** 2025-09-02 00:55:05.904492 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.904497 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.904502 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.904506 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.904511 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.904519 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.904524 | orchestrator | 2025-09-02 00:55:05.904528 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-02 00:55:05.904533 | orchestrator | Tuesday 02 September 2025 00:52:30 +0000 (0:00:01.393) 0:09:05.598 ***** 2025-09-02 00:55:05.904538 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.904543 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.904548 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.904553 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.904557 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.904562 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.904567 | orchestrator | 2025-09-02 00:55:05.904572 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-02 00:55:05.904577 | orchestrator | Tuesday 02 September 2025 00:52:31 +0000 (0:00:01.004) 0:09:06.603 ***** 2025-09-02 00:55:05.904582 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.904586 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.904591 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.904596 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.904601 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.904606 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.904610 | orchestrator | 2025-09-02 00:55:05.904615 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-02 00:55:05.904620 | orchestrator | Tuesday 02 September 2025 00:52:32 +0000 (0:00:00.868) 0:09:07.472 ***** 2025-09-02 00:55:05.904625 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.904630 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.904635 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.904640 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.904644 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.904649 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.904654 | orchestrator | 2025-09-02 00:55:05.904659 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-02 
00:55:05.904664 | orchestrator | Tuesday 02 September 2025 00:52:32 +0000 (0:00:00.591) 0:09:08.063 ***** 2025-09-02 00:55:05.904669 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.904674 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.904678 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.904683 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.904690 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.904695 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.904700 | orchestrator | 2025-09-02 00:55:05.904705 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-02 00:55:05.904710 | orchestrator | Tuesday 02 September 2025 00:52:33 +0000 (0:00:00.856) 0:09:08.920 ***** 2025-09-02 00:55:05.904715 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.904720 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.904725 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.904730 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.904735 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.904740 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.904744 | orchestrator | 2025-09-02 00:55:05.904749 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-02 00:55:05.904754 | orchestrator | Tuesday 02 September 2025 00:52:34 +0000 (0:00:00.598) 0:09:09.518 ***** 2025-09-02 00:55:05.904759 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.904764 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.904769 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.904774 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.904778 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.904783 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.904788 | orchestrator | 2025-09-02 00:55:05.904793 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-02 00:55:05.904798 | orchestrator | Tuesday 02 September 2025 00:52:35 +0000 (0:00:00.863) 0:09:10.382 ***** 2025-09-02 00:55:05.904806 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.904811 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.904815 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.904820 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.904825 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.904830 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.904835 | orchestrator | 2025-09-02 00:55:05.904840 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-02 00:55:05.904845 | orchestrator | Tuesday 02 September 2025 00:52:35 +0000 (0:00:00.706) 0:09:11.088 ***** 2025-09-02 00:55:05.904849 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.904854 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.904859 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.904864 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:05.904869 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:05.904874 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:05.904878 | orchestrator | 2025-09-02 00:55:05.904883 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-02 00:55:05.904888 | orchestrator | Tuesday 02 
September 2025 00:52:36 +0000 (0:00:00.855) 0:09:11.943 ***** 2025-09-02 00:55:05.904893 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.904898 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.904903 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.904908 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.904913 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.904917 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.904922 | orchestrator | 2025-09-02 00:55:05.904929 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-02 00:55:05.904934 | orchestrator | Tuesday 02 September 2025 00:52:37 +0000 (0:00:00.647) 0:09:12.591 ***** 2025-09-02 00:55:05.904939 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.904944 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.904949 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.904954 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.904959 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.904964 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.904969 | orchestrator | 2025-09-02 00:55:05.904974 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-02 00:55:05.904978 | orchestrator | Tuesday 02 September 2025 00:52:38 +0000 (0:00:00.872) 0:09:13.464 ***** 2025-09-02 00:55:05.904983 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.904988 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.904993 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.904998 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.905003 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.905007 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.905012 | orchestrator | 2025-09-02 00:55:05.905017 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-09-02 00:55:05.905022 | orchestrator | Tuesday 02 September 2025 00:52:39 +0000 (0:00:01.220) 0:09:14.685 ***** 2025-09-02 00:55:05.905027 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-02 00:55:05.905032 | orchestrator | 2025-09-02 00:55:05.905037 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-09-02 00:55:05.905042 | orchestrator | Tuesday 02 September 2025 00:52:43 +0000 (0:00:04.032) 0:09:18.717 ***** 2025-09-02 00:55:05.905047 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-02 00:55:05.905052 | orchestrator | 2025-09-02 00:55:05.905056 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-09-02 00:55:05.905061 | orchestrator | Tuesday 02 September 2025 00:52:45 +0000 (0:00:01.987) 0:09:20.704 ***** 2025-09-02 00:55:05.905066 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.905071 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.905076 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.905084 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.905089 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.905094 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.905098 | orchestrator | 2025-09-02 00:55:05.905103 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-09-02 00:55:05.905108 | orchestrator | Tuesday 02 September 2025 00:52:46 +0000 
(0:00:01.558) 0:09:22.263 ***** 2025-09-02 00:55:05.905113 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.905118 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.905123 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.905128 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.905132 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.905137 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.905142 | orchestrator | 2025-09-02 00:55:05.905147 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-09-02 00:55:05.905152 | orchestrator | Tuesday 02 September 2025 00:52:48 +0000 (0:00:01.303) 0:09:23.567 ***** 2025-09-02 00:55:05.905159 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.905165 | orchestrator | 2025-09-02 00:55:05.905170 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-09-02 00:55:05.905174 | orchestrator | Tuesday 02 September 2025 00:52:49 +0000 (0:00:01.281) 0:09:24.848 ***** 2025-09-02 00:55:05.905179 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.905184 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.905189 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.905194 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.905199 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.905204 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.905208 | orchestrator | 2025-09-02 00:55:05.905213 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-09-02 00:55:05.905218 | orchestrator | Tuesday 02 September 2025 00:52:51 +0000 (0:00:01.757) 0:09:26.605 ***** 2025-09-02 00:55:05.905223 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.905228 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.905233 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.905238 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.905242 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.905247 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.905252 | orchestrator | 2025-09-02 00:55:05.905257 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-09-02 00:55:05.905262 | orchestrator | Tuesday 02 September 2025 00:52:55 +0000 (0:00:03.926) 0:09:30.532 ***** 2025-09-02 00:55:05.905267 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:05.905272 | orchestrator | 2025-09-02 00:55:05.905277 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-09-02 00:55:05.905282 | orchestrator | Tuesday 02 September 2025 00:52:56 +0000 (0:00:01.408) 0:09:31.940 ***** 2025-09-02 00:55:05.905287 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.905291 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.905296 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.905301 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.905306 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.905311 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.905316 | 
orchestrator | 2025-09-02 00:55:05.905321 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-09-02 00:55:05.905325 | orchestrator | Tuesday 02 September 2025 00:52:57 +0000 (0:00:00.869) 0:09:32.810 ***** 2025-09-02 00:55:05.905330 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.905335 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.905340 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.905348 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:05.905352 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:05.905357 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:05.905362 | orchestrator | 2025-09-02 00:55:05.905369 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-09-02 00:55:05.905374 | orchestrator | Tuesday 02 September 2025 00:53:00 +0000 (0:00:03.333) 0:09:36.143 ***** 2025-09-02 00:55:05.905379 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.905384 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.905389 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.905394 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:05.905398 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:05.905403 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:05.905408 | orchestrator | 2025-09-02 00:55:05.905413 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-09-02 00:55:05.905418 | orchestrator | 2025-09-02 00:55:05.905430 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-02 00:55:05.905435 | orchestrator | Tuesday 02 September 2025 00:53:01 +0000 (0:00:00.953) 0:09:37.096 ***** 2025-09-02 00:55:05.905439 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.905444 | orchestrator | 2025-09-02 00:55:05.905449 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-02 00:55:05.905454 | orchestrator | Tuesday 02 September 2025 00:53:02 +0000 (0:00:00.861) 0:09:37.957 ***** 2025-09-02 00:55:05.905459 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.905464 | orchestrator | 2025-09-02 00:55:05.905469 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-02 00:55:05.905474 | orchestrator | Tuesday 02 September 2025 00:53:03 +0000 (0:00:00.635) 0:09:38.593 ***** 2025-09-02 00:55:05.905478 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.905483 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.905488 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.905493 | orchestrator | 2025-09-02 00:55:05.905498 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-02 00:55:05.905503 | orchestrator | Tuesday 02 September 2025 00:53:04 +0000 (0:00:00.752) 0:09:39.346 ***** 2025-09-02 00:55:05.905508 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.905513 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.905518 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.905522 | orchestrator | 2025-09-02 00:55:05.905527 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2025-09-02 00:55:05.905532 | orchestrator | Tuesday 02 September 2025 00:53:04 +0000 (0:00:00.754) 0:09:40.100 ***** 2025-09-02 00:55:05.905537 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.905542 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.905547 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.905552 | orchestrator | 2025-09-02 00:55:05.905556 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-02 00:55:05.905561 | orchestrator | Tuesday 02 September 2025 00:53:05 +0000 (0:00:00.869) 0:09:40.969 ***** 2025-09-02 00:55:05.905566 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.905571 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.905576 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.905581 | orchestrator | 2025-09-02 00:55:05.905588 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-02 00:55:05.905593 | orchestrator | Tuesday 02 September 2025 00:53:06 +0000 (0:00:00.794) 0:09:41.763 ***** 2025-09-02 00:55:05.905598 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.905603 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.905608 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.905612 | orchestrator | 2025-09-02 00:55:05.905617 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-02 00:55:05.905625 | orchestrator | Tuesday 02 September 2025 00:53:07 +0000 (0:00:00.759) 0:09:42.523 ***** 2025-09-02 00:55:05.905630 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.905635 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.905640 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.905645 | orchestrator | 2025-09-02 00:55:05.905650 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-02 00:55:05.905654 | orchestrator | Tuesday 02 September 2025 00:53:07 +0000 (0:00:00.353) 0:09:42.876 ***** 2025-09-02 00:55:05.905659 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.905664 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.905669 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.905674 | orchestrator | 2025-09-02 00:55:05.905679 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-02 00:55:05.905684 | orchestrator | Tuesday 02 September 2025 00:53:07 +0000 (0:00:00.297) 0:09:43.173 ***** 2025-09-02 00:55:05.905689 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.905693 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.905698 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.905703 | orchestrator | 2025-09-02 00:55:05.905708 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-02 00:55:05.905713 | orchestrator | Tuesday 02 September 2025 00:53:08 +0000 (0:00:00.848) 0:09:44.021 ***** 2025-09-02 00:55:05.905718 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.905723 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.905728 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.905733 | orchestrator | 2025-09-02 00:55:05.905738 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-02 00:55:05.905743 | orchestrator | Tuesday 02 September 2025 00:53:09 +0000 (0:00:01.269) 
0:09:45.291 ***** 2025-09-02 00:55:05.905747 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.905752 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.905757 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.905762 | orchestrator | 2025-09-02 00:55:05.905767 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-02 00:55:05.905772 | orchestrator | Tuesday 02 September 2025 00:53:10 +0000 (0:00:00.439) 0:09:45.730 ***** 2025-09-02 00:55:05.905777 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.905782 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.905787 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.905791 | orchestrator | 2025-09-02 00:55:05.905796 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-02 00:55:05.905803 | orchestrator | Tuesday 02 September 2025 00:53:10 +0000 (0:00:00.322) 0:09:46.053 ***** 2025-09-02 00:55:05.905808 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.905813 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.905818 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.905823 | orchestrator | 2025-09-02 00:55:05.905828 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-02 00:55:05.905833 | orchestrator | Tuesday 02 September 2025 00:53:11 +0000 (0:00:00.353) 0:09:46.406 ***** 2025-09-02 00:55:05.905838 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.905843 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.905848 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.905852 | orchestrator | 2025-09-02 00:55:05.905857 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-02 00:55:05.905862 | orchestrator | Tuesday 02 September 2025 00:53:11 +0000 (0:00:00.693) 0:09:47.100 ***** 2025-09-02 00:55:05.905867 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.905872 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.905877 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.905882 | orchestrator | 2025-09-02 00:55:05.905887 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-02 00:55:05.905892 | orchestrator | Tuesday 02 September 2025 00:53:12 +0000 (0:00:00.377) 0:09:47.477 ***** 2025-09-02 00:55:05.905899 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.905904 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.905909 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.905914 | orchestrator | 2025-09-02 00:55:05.905919 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-02 00:55:05.905924 | orchestrator | Tuesday 02 September 2025 00:53:12 +0000 (0:00:00.308) 0:09:47.786 ***** 2025-09-02 00:55:05.905929 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.905934 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.905939 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.905944 | orchestrator | 2025-09-02 00:55:05.905948 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-02 00:55:05.905953 | orchestrator | Tuesday 02 September 2025 00:53:12 +0000 (0:00:00.270) 0:09:48.057 ***** 2025-09-02 00:55:05.905958 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.905963 
| orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.905968 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.905973 | orchestrator | 2025-09-02 00:55:05.905978 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-02 00:55:05.905983 | orchestrator | Tuesday 02 September 2025 00:53:13 +0000 (0:00:00.507) 0:09:48.564 ***** 2025-09-02 00:55:05.905987 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.905992 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.905997 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.906002 | orchestrator | 2025-09-02 00:55:05.906007 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-02 00:55:05.906012 | orchestrator | Tuesday 02 September 2025 00:53:13 +0000 (0:00:00.380) 0:09:48.945 ***** 2025-09-02 00:55:05.906031 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.906036 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.906041 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.906046 | orchestrator | 2025-09-02 00:55:05.906051 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-09-02 00:55:05.906058 | orchestrator | Tuesday 02 September 2025 00:53:14 +0000 (0:00:00.643) 0:09:49.589 ***** 2025-09-02 00:55:05.906063 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.906067 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.906072 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-09-02 00:55:05.906077 | orchestrator | 2025-09-02 00:55:05.906082 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-09-02 00:55:05.906087 | orchestrator | Tuesday 02 September 2025 00:53:15 +0000 (0:00:00.770) 0:09:50.360 ***** 2025-09-02 00:55:05.906092 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-02 00:55:05.906097 | orchestrator | 2025-09-02 00:55:05.906101 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-09-02 00:55:05.906106 | orchestrator | Tuesday 02 September 2025 00:53:17 +0000 (0:00:02.501) 0:09:52.861 ***** 2025-09-02 00:55:05.906112 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-09-02 00:55:05.906118 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.906123 | orchestrator | 2025-09-02 00:55:05.906128 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-09-02 00:55:05.906132 | orchestrator | Tuesday 02 September 2025 00:53:17 +0000 (0:00:00.238) 0:09:53.100 ***** 2025-09-02 00:55:05.906138 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-02 00:55:05.906146 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 
'type': 1}) 2025-09-02 00:55:05.906156 | orchestrator | 2025-09-02 00:55:05.906161 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-09-02 00:55:05.906166 | orchestrator | Tuesday 02 September 2025 00:53:26 +0000 (0:00:08.825) 0:10:01.926 ***** 2025-09-02 00:55:05.906170 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-02 00:55:05.906175 | orchestrator | 2025-09-02 00:55:05.906180 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-09-02 00:55:05.906187 | orchestrator | Tuesday 02 September 2025 00:53:30 +0000 (0:00:03.563) 0:10:05.489 ***** 2025-09-02 00:55:05.906192 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.906197 | orchestrator | 2025-09-02 00:55:05.906202 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-09-02 00:55:05.906207 | orchestrator | Tuesday 02 September 2025 00:53:31 +0000 (0:00:00.838) 0:10:06.328 ***** 2025-09-02 00:55:05.906212 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-02 00:55:05.906217 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-02 00:55:05.906222 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-02 00:55:05.906226 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-09-02 00:55:05.906231 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-09-02 00:55:05.906236 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-09-02 00:55:05.906241 | orchestrator | 2025-09-02 00:55:05.906246 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-09-02 00:55:05.906251 | orchestrator | Tuesday 02 September 2025 00:53:32 +0000 (0:00:01.088) 0:10:07.416 ***** 2025-09-02 00:55:05.906255 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:55:05.906260 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-02 00:55:05.906265 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-02 00:55:05.906270 | orchestrator | 2025-09-02 00:55:05.906275 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-09-02 00:55:05.906280 | orchestrator | Tuesday 02 September 2025 00:53:34 +0000 (0:00:02.195) 0:10:09.611 ***** 2025-09-02 00:55:05.906284 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-02 00:55:05.906289 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-02 00:55:05.906294 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.906299 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-02 00:55:05.906304 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-02 00:55:05.906309 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-02 00:55:05.906314 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.906319 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-02 00:55:05.906323 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.906328 | orchestrator | 2025-09-02 00:55:05.906333 | orchestrator | TASK [ceph-mds : Create mds keyring] 
******************************************* 2025-09-02 00:55:05.906338 | orchestrator | Tuesday 02 September 2025 00:53:35 +0000 (0:00:01.154) 0:10:10.766 ***** 2025-09-02 00:55:05.906343 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.906348 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.906353 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.906358 | orchestrator | 2025-09-02 00:55:05.906362 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-09-02 00:55:05.906369 | orchestrator | Tuesday 02 September 2025 00:53:38 +0000 (0:00:02.662) 0:10:13.428 ***** 2025-09-02 00:55:05.906375 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.906379 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.906388 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.906393 | orchestrator | 2025-09-02 00:55:05.906398 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-09-02 00:55:05.906403 | orchestrator | Tuesday 02 September 2025 00:53:38 +0000 (0:00:00.675) 0:10:14.104 ***** 2025-09-02 00:55:05.906408 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.906413 | orchestrator | 2025-09-02 00:55:05.906418 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-09-02 00:55:05.906441 | orchestrator | Tuesday 02 September 2025 00:53:39 +0000 (0:00:00.558) 0:10:14.663 ***** 2025-09-02 00:55:05.906447 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.906452 | orchestrator | 2025-09-02 00:55:05.906457 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-09-02 00:55:05.906462 | orchestrator | Tuesday 02 September 2025 00:53:40 +0000 (0:00:00.765) 0:10:15.428 ***** 2025-09-02 00:55:05.906467 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.906471 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.906476 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.906481 | orchestrator | 2025-09-02 00:55:05.906486 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-09-02 00:55:05.906491 | orchestrator | Tuesday 02 September 2025 00:53:41 +0000 (0:00:01.334) 0:10:16.763 ***** 2025-09-02 00:55:05.906496 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.906501 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.906506 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.906510 | orchestrator | 2025-09-02 00:55:05.906515 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-09-02 00:55:05.906520 | orchestrator | Tuesday 02 September 2025 00:53:42 +0000 (0:00:01.185) 0:10:17.948 ***** 2025-09-02 00:55:05.906525 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.906530 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.906535 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.906540 | orchestrator | 2025-09-02 00:55:05.906545 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-09-02 00:55:05.906549 | orchestrator | Tuesday 02 September 2025 00:53:44 +0000 (0:00:01.707) 0:10:19.656 ***** 2025-09-02 00:55:05.906554 | 
orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.906559 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.906564 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.906569 | orchestrator | 2025-09-02 00:55:05.906574 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-09-02 00:55:05.906579 | orchestrator | Tuesday 02 September 2025 00:53:46 +0000 (0:00:02.240) 0:10:21.897 ***** 2025-09-02 00:55:05.906586 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.906591 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.906596 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.906601 | orchestrator | 2025-09-02 00:55:05.906606 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-02 00:55:05.906611 | orchestrator | Tuesday 02 September 2025 00:53:47 +0000 (0:00:01.249) 0:10:23.146 ***** 2025-09-02 00:55:05.906616 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.906621 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.906626 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.906631 | orchestrator | 2025-09-02 00:55:05.906636 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-02 00:55:05.906641 | orchestrator | Tuesday 02 September 2025 00:53:48 +0000 (0:00:00.962) 0:10:24.109 ***** 2025-09-02 00:55:05.906645 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.906650 | orchestrator | 2025-09-02 00:55:05.906655 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-02 00:55:05.906663 | orchestrator | Tuesday 02 September 2025 00:53:49 +0000 (0:00:00.535) 0:10:24.644 ***** 2025-09-02 00:55:05.906668 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.906673 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.906678 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.906683 | orchestrator | 2025-09-02 00:55:05.906688 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-02 00:55:05.906693 | orchestrator | Tuesday 02 September 2025 00:53:49 +0000 (0:00:00.338) 0:10:24.982 ***** 2025-09-02 00:55:05.906698 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.906703 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.906707 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.906712 | orchestrator | 2025-09-02 00:55:05.906717 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-02 00:55:05.906722 | orchestrator | Tuesday 02 September 2025 00:53:51 +0000 (0:00:01.536) 0:10:26.518 ***** 2025-09-02 00:55:05.906727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-02 00:55:05.906732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-02 00:55:05.906737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-02 00:55:05.906742 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.906747 | orchestrator | 2025-09-02 00:55:05.906752 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-02 00:55:05.906757 | orchestrator | Tuesday 02 September 2025 00:53:51 +0000 (0:00:00.665) 0:10:27.184 ***** 2025-09-02 00:55:05.906761 | orchestrator | 
ok: [testbed-node-3] 2025-09-02 00:55:05.906766 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.906771 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.906776 | orchestrator | 2025-09-02 00:55:05.906781 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-02 00:55:05.906786 | orchestrator | 2025-09-02 00:55:05.906791 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-02 00:55:05.906796 | orchestrator | Tuesday 02 September 2025 00:53:52 +0000 (0:00:00.553) 0:10:27.738 ***** 2025-09-02 00:55:05.906803 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.906808 | orchestrator | 2025-09-02 00:55:05.906813 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-02 00:55:05.906818 | orchestrator | Tuesday 02 September 2025 00:53:53 +0000 (0:00:00.747) 0:10:28.486 ***** 2025-09-02 00:55:05.906823 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.906828 | orchestrator | 2025-09-02 00:55:05.906833 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-02 00:55:05.906838 | orchestrator | Tuesday 02 September 2025 00:53:53 +0000 (0:00:00.521) 0:10:29.007 ***** 2025-09-02 00:55:05.906842 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.906847 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.906852 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.906857 | orchestrator | 2025-09-02 00:55:05.906862 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-02 00:55:05.906867 | orchestrator | Tuesday 02 September 2025 00:53:54 +0000 (0:00:00.551) 0:10:29.558 ***** 2025-09-02 00:55:05.906872 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.906877 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.906882 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.906887 | orchestrator | 2025-09-02 00:55:05.906891 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-02 00:55:05.906896 | orchestrator | Tuesday 02 September 2025 00:53:54 +0000 (0:00:00.725) 0:10:30.284 ***** 2025-09-02 00:55:05.906901 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.906906 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.906911 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.906916 | orchestrator | 2025-09-02 00:55:05.906924 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-02 00:55:05.906929 | orchestrator | Tuesday 02 September 2025 00:53:55 +0000 (0:00:00.737) 0:10:31.022 ***** 2025-09-02 00:55:05.906934 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.906939 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.906944 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.906948 | orchestrator | 2025-09-02 00:55:05.906953 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-02 00:55:05.906958 | orchestrator | Tuesday 02 September 2025 00:53:56 +0000 (0:00:00.734) 0:10:31.757 ***** 2025-09-02 00:55:05.906963 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.906968 | 
orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.906973 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.906978 | orchestrator | 2025-09-02 00:55:05.906983 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-02 00:55:05.906988 | orchestrator | Tuesday 02 September 2025 00:53:57 +0000 (0:00:00.592) 0:10:32.349 ***** 2025-09-02 00:55:05.906993 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.906997 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.907004 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.907009 | orchestrator | 2025-09-02 00:55:05.907014 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-02 00:55:05.907018 | orchestrator | Tuesday 02 September 2025 00:53:57 +0000 (0:00:00.345) 0:10:32.695 ***** 2025-09-02 00:55:05.907023 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.907028 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.907032 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.907037 | orchestrator | 2025-09-02 00:55:05.907041 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-02 00:55:05.907046 | orchestrator | Tuesday 02 September 2025 00:53:57 +0000 (0:00:00.327) 0:10:33.023 ***** 2025-09-02 00:55:05.907051 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.907055 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.907060 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.907065 | orchestrator | 2025-09-02 00:55:05.907069 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-02 00:55:05.907074 | orchestrator | Tuesday 02 September 2025 00:53:58 +0000 (0:00:00.728) 0:10:33.751 ***** 2025-09-02 00:55:05.907079 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.907083 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.907088 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.907092 | orchestrator | 2025-09-02 00:55:05.907097 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-02 00:55:05.907102 | orchestrator | Tuesday 02 September 2025 00:53:59 +0000 (0:00:00.985) 0:10:34.736 ***** 2025-09-02 00:55:05.907106 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.907111 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.907116 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.907120 | orchestrator | 2025-09-02 00:55:05.907125 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-02 00:55:05.907129 | orchestrator | Tuesday 02 September 2025 00:53:59 +0000 (0:00:00.309) 0:10:35.045 ***** 2025-09-02 00:55:05.907134 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.907139 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.907143 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.907148 | orchestrator | 2025-09-02 00:55:05.907153 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-02 00:55:05.907157 | orchestrator | Tuesday 02 September 2025 00:54:00 +0000 (0:00:00.319) 0:10:35.365 ***** 2025-09-02 00:55:05.907162 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.907166 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.907171 | orchestrator | ok: [testbed-node-5] 
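Note: the recurring "Check for a <daemon> container" / "Set_fact handler_<daemon>_status" pairs in these plays are ceph-handler probing which Ceph daemons already run on each node, so that the restart handlers later in the play only fire where the matching daemon exists. A minimal illustrative sketch of that pattern in Ansible YAML (the variable, group, and register names below are assumptions for illustration, not the verbatim ceph-ansible role source):

    - name: Check for an osd container
      ansible.builtin.command: "{{ container_binary }} ps -q --filter name=ceph-osd-{{ ansible_facts['hostname'] }}"
      register: osd_container_stat
      changed_when: false
      failed_when: false
      when: inventory_hostname in groups.get('osds', [])

    - name: Set_fact handler_osd_status
      ansible.builtin.set_fact:
        handler_osd_status: "{{ (osd_container_stat.stdout_lines | default([])) | length > 0 }}"
      when: inventory_hostname in groups.get('osds', [])

The "skipping" results in the log correspond to hosts outside the relevant group, matching the when: guard in this sketch.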
2025-09-02 00:55:05.907176 | orchestrator | 2025-09-02 00:55:05.907180 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-02 00:55:05.907188 | orchestrator | Tuesday 02 September 2025 00:54:00 +0000 (0:00:00.358) 0:10:35.723 ***** 2025-09-02 00:55:05.907193 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.907198 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.907202 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.907207 | orchestrator | 2025-09-02 00:55:05.907212 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-02 00:55:05.907216 | orchestrator | Tuesday 02 September 2025 00:54:01 +0000 (0:00:00.628) 0:10:36.352 ***** 2025-09-02 00:55:05.907221 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.907227 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.907232 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.907237 | orchestrator | 2025-09-02 00:55:05.907241 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-02 00:55:05.907246 | orchestrator | Tuesday 02 September 2025 00:54:01 +0000 (0:00:00.340) 0:10:36.692 ***** 2025-09-02 00:55:05.907251 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.907255 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.907260 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.907265 | orchestrator | 2025-09-02 00:55:05.907269 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-02 00:55:05.907274 | orchestrator | Tuesday 02 September 2025 00:54:01 +0000 (0:00:00.346) 0:10:37.038 ***** 2025-09-02 00:55:05.907279 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.907283 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.907288 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.907292 | orchestrator | 2025-09-02 00:55:05.907297 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-02 00:55:05.907302 | orchestrator | Tuesday 02 September 2025 00:54:02 +0000 (0:00:00.310) 0:10:37.349 ***** 2025-09-02 00:55:05.907306 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.907311 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.907315 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.907320 | orchestrator | 2025-09-02 00:55:05.907324 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-02 00:55:05.907329 | orchestrator | Tuesday 02 September 2025 00:54:02 +0000 (0:00:00.595) 0:10:37.945 ***** 2025-09-02 00:55:05.907334 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.907338 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.907343 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.907348 | orchestrator | 2025-09-02 00:55:05.907352 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-02 00:55:05.907357 | orchestrator | Tuesday 02 September 2025 00:54:02 +0000 (0:00:00.342) 0:10:38.287 ***** 2025-09-02 00:55:05.907362 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.907366 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.907371 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.907375 | orchestrator | 2025-09-02 00:55:05.907380 | orchestrator | TASK [ceph-rgw : Include common.yml] 
******************************************* 2025-09-02 00:55:05.907385 | orchestrator | Tuesday 02 September 2025 00:54:03 +0000 (0:00:00.549) 0:10:38.837 ***** 2025-09-02 00:55:05.907389 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.907394 | orchestrator | 2025-09-02 00:55:05.907398 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-02 00:55:05.907403 | orchestrator | Tuesday 02 September 2025 00:54:04 +0000 (0:00:00.865) 0:10:39.702 ***** 2025-09-02 00:55:05.907408 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:55:05.907412 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-02 00:55:05.907419 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-02 00:55:05.907433 | orchestrator | 2025-09-02 00:55:05.907438 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-02 00:55:05.907443 | orchestrator | Tuesday 02 September 2025 00:54:06 +0000 (0:00:02.341) 0:10:42.043 ***** 2025-09-02 00:55:05.907450 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-02 00:55:05.907455 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-02 00:55:05.907460 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.907464 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-02 00:55:05.907469 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-02 00:55:05.907473 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.907478 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-02 00:55:05.907483 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-02 00:55:05.907487 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.907492 | orchestrator | 2025-09-02 00:55:05.907496 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-02 00:55:05.907501 | orchestrator | Tuesday 02 September 2025 00:54:07 +0000 (0:00:01.242) 0:10:43.286 ***** 2025-09-02 00:55:05.907505 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.907510 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.907514 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.907519 | orchestrator | 2025-09-02 00:55:05.907524 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-02 00:55:05.907528 | orchestrator | Tuesday 02 September 2025 00:54:08 +0000 (0:00:00.321) 0:10:43.608 ***** 2025-09-02 00:55:05.907533 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.907537 | orchestrator | 2025-09-02 00:55:05.907542 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-02 00:55:05.907547 | orchestrator | Tuesday 02 September 2025 00:54:09 +0000 (0:00:00.875) 0:10:44.484 ***** 2025-09-02 00:55:05.907551 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-02 00:55:05.907556 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 
2025-09-02 00:55:05.907561 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-02 00:55:05.907566 | orchestrator | 2025-09-02 00:55:05.907570 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-02 00:55:05.907575 | orchestrator | Tuesday 02 September 2025 00:54:10 +0000 (0:00:00.943) 0:10:45.427 ***** 2025-09-02 00:55:05.907582 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:55:05.907586 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-02 00:55:05.907591 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:55:05.907596 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-02 00:55:05.907601 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:55:05.907605 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-02 00:55:05.907610 | orchestrator | 2025-09-02 00:55:05.907615 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-02 00:55:05.907619 | orchestrator | Tuesday 02 September 2025 00:54:14 +0000 (0:00:04.440) 0:10:49.867 ***** 2025-09-02 00:55:05.907624 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:55:05.907628 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-02 00:55:05.907633 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:55:05.907641 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-02 00:55:05.907645 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:55:05.907650 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-02 00:55:05.907654 | orchestrator | 2025-09-02 00:55:05.907659 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-02 00:55:05.907664 | orchestrator | Tuesday 02 September 2025 00:54:17 +0000 (0:00:03.222) 0:10:53.090 ***** 2025-09-02 00:55:05.907668 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-02 00:55:05.907673 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-02 00:55:05.907677 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.907682 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.907687 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-02 00:55:05.907691 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.907696 | orchestrator | 2025-09-02 00:55:05.907717 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-02 00:55:05.907722 | orchestrator | Tuesday 02 September 2025 00:54:19 +0000 (0:00:01.280) 0:10:54.370 ***** 2025-09-02 00:55:05.907727 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-02 00:55:05.907732 | orchestrator | 2025-09-02 00:55:05.907736 | orchestrator | TASK [ceph-rgw 
: Create ec profile] ******************************************** 2025-09-02 00:55:05.907744 | orchestrator | Tuesday 02 September 2025 00:54:19 +0000 (0:00:00.242) 0:10:54.612 ***** 2025-09-02 00:55:05.907749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-02 00:55:05.907754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-02 00:55:05.907758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-02 00:55:05.907763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-02 00:55:05.907768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-02 00:55:05.907772 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.907777 | orchestrator | 2025-09-02 00:55:05.907782 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-02 00:55:05.907786 | orchestrator | Tuesday 02 September 2025 00:54:19 +0000 (0:00:00.594) 0:10:55.207 ***** 2025-09-02 00:55:05.907791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-02 00:55:05.907795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-02 00:55:05.907800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-02 00:55:05.907805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-02 00:55:05.907809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-02 00:55:05.907814 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.907819 | orchestrator | 2025-09-02 00:55:05.907823 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-02 00:55:05.907828 | orchestrator | Tuesday 02 September 2025 00:54:20 +0000 (0:00:00.575) 0:10:55.783 ***** 2025-09-02 00:55:05.907832 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-02 00:55:05.907843 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-02 00:55:05.907848 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-02 00:55:05.907852 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-02 00:55:05.907857 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 
'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-02 00:55:05.907862 | orchestrator | 2025-09-02 00:55:05.907866 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-02 00:55:05.907871 | orchestrator | Tuesday 02 September 2025 00:54:52 +0000 (0:00:31.537) 0:11:27.320 ***** 2025-09-02 00:55:05.907876 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.907880 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.907885 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.907889 | orchestrator | 2025-09-02 00:55:05.907894 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-02 00:55:05.907898 | orchestrator | Tuesday 02 September 2025 00:54:52 +0000 (0:00:00.300) 0:11:27.620 ***** 2025-09-02 00:55:05.907903 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.907908 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.907912 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.907917 | orchestrator | 2025-09-02 00:55:05.907921 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-02 00:55:05.907926 | orchestrator | Tuesday 02 September 2025 00:54:52 +0000 (0:00:00.610) 0:11:28.230 ***** 2025-09-02 00:55:05.907931 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.907935 | orchestrator | 2025-09-02 00:55:05.907940 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-02 00:55:05.907944 | orchestrator | Tuesday 02 September 2025 00:54:53 +0000 (0:00:00.592) 0:11:28.823 ***** 2025-09-02 00:55:05.907949 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.907954 | orchestrator | 2025-09-02 00:55:05.907958 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-02 00:55:05.907963 | orchestrator | Tuesday 02 September 2025 00:54:54 +0000 (0:00:00.804) 0:11:29.627 ***** 2025-09-02 00:55:05.907967 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.907972 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.907977 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.907981 | orchestrator | 2025-09-02 00:55:05.907988 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-02 00:55:05.907993 | orchestrator | Tuesday 02 September 2025 00:54:55 +0000 (0:00:01.343) 0:11:30.971 ***** 2025-09-02 00:55:05.907997 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.908002 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.908006 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.908011 | orchestrator | 2025-09-02 00:55:05.908016 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-02 00:55:05.908020 | orchestrator | Tuesday 02 September 2025 00:54:56 +0000 (0:00:01.168) 0:11:32.140 ***** 2025-09-02 00:55:05.908025 | orchestrator | changed: [testbed-node-3] 2025-09-02 00:55:05.908030 | orchestrator | changed: [testbed-node-4] 2025-09-02 00:55:05.908034 | orchestrator | changed: [testbed-node-5] 2025-09-02 00:55:05.908039 | orchestrator | 2025-09-02 00:55:05.908044 | orchestrator | TASK [ceph-rgw : Systemd 
start rgw container] ********************************** 2025-09-02 00:55:05.908048 | orchestrator | Tuesday 02 September 2025 00:54:58 +0000 (0:00:01.800) 0:11:33.941 ***** 2025-09-02 00:55:05.908056 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-02 00:55:05.908060 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-02 00:55:05.908065 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-02 00:55:05.908070 | orchestrator | 2025-09-02 00:55:05.908074 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-02 00:55:05.908079 | orchestrator | Tuesday 02 September 2025 00:55:01 +0000 (0:00:02.565) 0:11:36.506 ***** 2025-09-02 00:55:05.908084 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.908088 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.908093 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.908098 | orchestrator | 2025-09-02 00:55:05.908102 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-02 00:55:05.908107 | orchestrator | Tuesday 02 September 2025 00:55:01 +0000 (0:00:00.380) 0:11:36.886 ***** 2025-09-02 00:55:05.908112 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:55:05.908116 | orchestrator | 2025-09-02 00:55:05.908121 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-02 00:55:05.908126 | orchestrator | Tuesday 02 September 2025 00:55:02 +0000 (0:00:00.829) 0:11:37.716 ***** 2025-09-02 00:55:05.908130 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.908135 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.908140 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.908144 | orchestrator | 2025-09-02 00:55:05.908149 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-02 00:55:05.908156 | orchestrator | Tuesday 02 September 2025 00:55:02 +0000 (0:00:00.333) 0:11:38.049 ***** 2025-09-02 00:55:05.908160 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.908165 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:55:05.908170 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:55:05.908174 | orchestrator | 2025-09-02 00:55:05.908179 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-02 00:55:05.908184 | orchestrator | Tuesday 02 September 2025 00:55:03 +0000 (0:00:00.348) 0:11:38.398 ***** 2025-09-02 00:55:05.908188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-02 00:55:05.908193 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-02 00:55:05.908198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-02 00:55:05.908202 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:55:05.908207 | orchestrator | 2025-09-02 00:55:05.908212 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-02 00:55:05.908216 | orchestrator | Tuesday 02 September 2025 00:55:04 +0000 (0:00:01.169) 0:11:39.567 ***** 
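The rgw pool parameters echoed in the task items above (pg_num 8, size 3, type replicated) translate into plain ceph CLI calls. Below is a minimal Python sketch of the equivalent commands, assuming the ceph binary and an admin keyring are available on the delegated monitor host; the role itself drives Ceph through its own modules, so this is only an illustration of what those task items amount to:

#!/usr/bin/env python3
"""Illustrative sketch: create the RGW pools listed in the log above.

Assumptions (not taken from the job itself): the ceph CLI is on PATH,
an admin keyring is readable, and pg_num=8 / size=3 / replicated match
the role defaults echoed in the task items.
"""
import subprocess

RGW_POOLS = [
    "default.rgw.buckets.data",
    "default.rgw.buckets.index",
    "default.rgw.control",
    "default.rgw.log",
    "default.rgw.meta",
]

def run(cmd):
    # Echo and execute a ceph command, failing loudly on error.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

for pool in RGW_POOLS:
    # Create a replicated pool with pg_num/pgp_num = 8, then set size = 3
    # and tag it for the rgw application, mirroring the logged parameters.
    run(["ceph", "osd", "pool", "create", pool, "8", "8", "replicated"])
    run(["ceph", "osd", "pool", "set", pool, "size", "3"])
    run(["ceph", "osd", "pool", "application", "enable", pool, "rgw"])

Run against a fresh cluster, this produces the same five default.rgw.* pools the "Create rgw pools" task reports as changed.
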
2025-09-02 00:55:05.908221 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:55:05.908225 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:55:05.908230 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:55:05.908235 | orchestrator | 2025-09-02 00:55:05.908239 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:55:05.908244 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-09-02 00:55:05.908249 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-09-02 00:55:05.908254 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-09-02 00:55:05.908258 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-09-02 00:55:05.908267 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-09-02 00:55:05.908272 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-09-02 00:55:05.908277 | orchestrator | 2025-09-02 00:55:05.908281 | orchestrator | 2025-09-02 00:55:05.908286 | orchestrator | 2025-09-02 00:55:05.908291 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:55:05.908295 | orchestrator | Tuesday 02 September 2025 00:55:04 +0000 (0:00:00.256) 0:11:39.824 ***** 2025-09-02 00:55:05.908302 | orchestrator | =============================================================================== 2025-09-02 00:55:05.908307 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 65.72s 2025-09-02 00:55:05.908312 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.10s 2025-09-02 00:55:05.908316 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.54s 2025-09-02 00:55:05.908321 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.72s 2025-09-02 00:55:05.908326 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 22.02s 2025-09-02 00:55:05.908330 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.71s 2025-09-02 00:55:05.908335 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.06s 2025-09-02 00:55:05.908339 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.74s 2025-09-02 00:55:05.908344 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.91s 2025-09-02 00:55:05.908349 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.83s 2025-09-02 00:55:05.908353 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.13s 2025-09-02 00:55:05.908358 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.52s 2025-09-02 00:55:05.908362 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.94s 2025-09-02 00:55:05.908367 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.44s 2025-09-02 00:55:05.908372 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.03s 2025-09-02 00:55:05.908376 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.93s 2025-09-02 00:55:05.908381 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.91s 2025-09-02 00:55:05.908386 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.67s 2025-09-02 00:55:05.908390 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.56s 2025-09-02 00:55:05.908395 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.40s 2025-09-02 00:55:05.908399 | orchestrator | 2025-09-02 00:55:05 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:55:05.908404 | orchestrator | 2025-09-02 00:55:05 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:55:05.908409 | orchestrator | 2025-09-02 00:55:05 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:55:08.942263 | orchestrator | 2025-09-02 00:55:08 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:55:08.943165 | orchestrator | 2025-09-02 00:55:08 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:55:08.945217 | orchestrator | 2025-09-02 00:55:08 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:55:08.945563 | orchestrator | 2025-09-02 00:55:08 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:55:11.987925 | orchestrator | 2025-09-02 00:55:11 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:55:11.990383 | orchestrator | 2025-09-02 00:55:11 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:55:11.991573 | orchestrator | 2025-09-02 00:55:11 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:55:11.991797 | orchestrator | 2025-09-02 00:55:11 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:55:15.041025 | orchestrator | 2025-09-02 00:55:15 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:55:15.043647 | orchestrator | 2025-09-02 00:55:15 | INFO  | Task 
439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:55:15.045781 | orchestrator | 2025-09-02 00:55:15 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:55:15.046303 | orchestrator | 2025-09-02 00:55:15 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:55:18.099155 | orchestrator | 2025-09-02 00:55:18 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:55:18.100551 | orchestrator | 2025-09-02 00:55:18 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:55:18.101737 | orchestrator | 2025-09-02 00:55:18 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:55:18.101761 | orchestrator | 2025-09-02 00:55:18 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:55:21.146093 | orchestrator | 2025-09-02 00:55:21 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:55:21.147933 | orchestrator | 2025-09-02 00:55:21 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:55:21.150235 | orchestrator | 2025-09-02 00:55:21 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:55:21.150343 | orchestrator | 2025-09-02 00:55:21 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:55:24.199227 | orchestrator | 2025-09-02 00:55:24 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:55:24.200841 | orchestrator | 2025-09-02 00:55:24 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:55:24.202154 | orchestrator | 2025-09-02 00:55:24 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:55:24.202177 | orchestrator | 2025-09-02 00:55:24 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:55:27.248500 | orchestrator | 2025-09-02 00:55:27 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:55:27.250093 | orchestrator | 2025-09-02 00:55:27 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:55:27.251483 | orchestrator | 2025-09-02 00:55:27 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:55:27.251585 | orchestrator | 2025-09-02 00:55:27 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:55:30.291569 | orchestrator | 2025-09-02 00:55:30 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:55:30.297298 | orchestrator | 2025-09-02 00:55:30 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:55:30.297347 | orchestrator | 2025-09-02 00:55:30 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:55:30.297360 | orchestrator | 2025-09-02 00:55:30 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:55:33.331364 | orchestrator | 2025-09-02 00:55:33 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:55:33.333504 | orchestrator | 2025-09-02 00:55:33 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:55:33.336271 | orchestrator | 2025-09-02 00:55:33 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:55:33.336585 | orchestrator | 2025-09-02 00:55:33 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:55:36.390655 | orchestrator | 2025-09-02 00:55:36 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state 
STARTED 2025-09-02 00:55:36.390995 | orchestrator | 2025-09-02 00:55:36 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:55:36.391941 | orchestrator | 2025-09-02 00:55:36 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:55:36.391964 | orchestrator | 2025-09-02 00:55:36 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:55:39.439591 | orchestrator | 2025-09-02 00:55:39 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:55:39.440683 | orchestrator | 2025-09-02 00:55:39 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state STARTED 2025-09-02 00:55:39.442758 | orchestrator | 2025-09-02 00:55:39 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:55:39.442886 | orchestrator | 2025-09-02 00:55:39 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:55:42.498972 | orchestrator | 2025-09-02 00:55:42 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:55:42.502935 | orchestrator | 2025-09-02 00:55:42 | INFO  | Task 439758c3-ba51-4ed5-9e1e-d8bed3efd194 is in state SUCCESS 2025-09-02 00:55:42.504150 | orchestrator | 2025-09-02 00:55:42.504183 | orchestrator | 2025-09-02 00:55:42.504195 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 00:55:42.504207 | orchestrator | 2025-09-02 00:55:42.504218 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 00:55:42.504230 | orchestrator | Tuesday 02 September 2025 00:52:45 +0000 (0:00:00.303) 0:00:00.303 ***** 2025-09-02 00:55:42.504242 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:42.504253 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:55:42.504264 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:55:42.504275 | orchestrator | 2025-09-02 00:55:42.504286 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 00:55:42.504297 | orchestrator | Tuesday 02 September 2025 00:52:45 +0000 (0:00:00.314) 0:00:00.618 ***** 2025-09-02 00:55:42.504308 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-09-02 00:55:42.504319 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-09-02 00:55:42.504330 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-09-02 00:55:42.504341 | orchestrator | 2025-09-02 00:55:42.504351 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-09-02 00:55:42.504362 | orchestrator | 2025-09-02 00:55:42.504373 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-02 00:55:42.504384 | orchestrator | Tuesday 02 September 2025 00:52:46 +0000 (0:00:00.425) 0:00:01.043 ***** 2025-09-02 00:55:42.504395 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:42.504406 | orchestrator | 2025-09-02 00:55:42.504417 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-09-02 00:55:42.504428 | orchestrator | Tuesday 02 September 2025 00:52:46 +0000 (0:00:00.546) 0:00:01.589 ***** 2025-09-02 00:55:42.504439 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-02 00:55:42.504472 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-02 00:55:42.504484 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-02 00:55:42.504517 | orchestrator | 2025-09-02 00:55:42.504528 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-09-02 00:55:42.504539 | orchestrator | Tuesday 02 September 2025 00:52:47 +0000 (0:00:00.742) 0:00:02.332 ***** 2025-09-02 00:55:42.504554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-02 00:55:42.504669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-02 00:55:42.504702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-02 00:55:42.504717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-02 00:55:42.504732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-02 00:55:42.504759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-02 00:55:42.504772 | orchestrator | 2025-09-02 00:55:42.504784 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-02 00:55:42.504795 | orchestrator | Tuesday 02 September 2025 00:52:49 +0000 (0:00:01.705) 0:00:04.038 ***** 2025-09-02 00:55:42.504806 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:42.504817 | orchestrator | 2025-09-02 00:55:42.504827 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-02 00:55:42.504838 | orchestrator | Tuesday 02 September 2025 00:52:49 +0000 (0:00:00.550) 0:00:04.588 ***** 2025-09-02 00:55:42.504933 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-02 00:55:42.504952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-02 00:55:42.504972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-02 00:55:42.504990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-02 00:55:42.505010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-02 00:55:42.505024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-02 00:55:42.505041 | orchestrator | 2025-09-02 00:55:42.505052 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-02 00:55:42.505063 | orchestrator | Tuesday 02 September 2025 00:52:52 +0000 (0:00:02.861) 0:00:07.450 ***** 2025-09-02 00:55:42.505075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-02 00:55:42.505091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-02 00:55:42.505104 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:42.505116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-02 00:55:42.505135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-02 00:55:42.505154 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:42.505166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-02 00:55:42.505178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-02 00:55:42.505190 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:42.505201 | orchestrator | 2025-09-02 00:55:42.505223 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-02 00:55:42.505235 | orchestrator | Tuesday 02 September 2025 00:52:53 +0000 (0:00:01.022) 0:00:08.472 ***** 2025-09-02 00:55:42.505246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-02 00:55:42.505266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-02 00:55:42.505285 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:42.505297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-02 00:55:42.505309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-02 00:55:42.505320 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:42.505337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-02 00:55:42.505357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-02 00:55:42.505375 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:42.505386 | orchestrator | 2025-09-02 00:55:42.505397 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-02 00:55:42.505408 | orchestrator | Tuesday 02 September 2025 00:52:55 +0000 (0:00:01.373) 0:00:09.845 ***** 2025-09-02 00:55:42.505419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-02 00:55:42.505431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-02 00:55:42.505447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-02 00:55:42.505486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-02 00:55:42.505506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-02 00:55:42.505519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-02 00:55:42.505531 | orchestrator | 2025-09-02 00:55:42.505542 | orchestrator | 
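The container definitions above wire a plain-HTTP healthcheck (healthcheck_curl against port 9200 or 5601 on each backend) into the container config and HAProxy. The same check, together with the wait-for-ready polling that appears further down in this play, can be reproduced with a short Python sketch; the URL, the absence of TLS and authentication, and the retry counts are assumptions based only on the values logged here:

#!/usr/bin/env python3
"""Illustrative sketch: readiness check against an OpenSearch backend.

Assumptions (not taken from the job itself): the API is reachable on the
internal address/port shown in the log (9200) over plain HTTP without
authentication, matching the healthcheck_curl test in the container config.
"""
import json
import time
import urllib.request

OPENSEARCH_URL = "http://192.168.16.10:9200"  # one of the backends from the log

def cluster_status(base_url: str) -> str:
    # Equivalent of the healthcheck_curl probe plus a cluster health query.
    with urllib.request.urlopen(f"{base_url}/_cluster/health", timeout=5) as resp:
        return json.load(resp)["status"]

def wait_until_ready(base_url: str, attempts: int = 30, delay: float = 10.0) -> None:
    # Poll until the cluster reports yellow or green, mirroring the
    # "Wait for OpenSearch to become ready" task that runs before the
    # retention-policy tasks later in this play.
    for _ in range(attempts):
        try:
            if cluster_status(base_url) in ("yellow", "green"):
                return
        except OSError:
            pass  # container may still be starting
        time.sleep(delay)
    raise TimeoutError(f"OpenSearch at {base_url} did not become ready")

if __name__ == "__main__":
    wait_until_ready(OPENSEARCH_URL)
    print("OpenSearch is ready")

The later "Check if a log retention policy exists" and "Create new log retention policy" tasks talk to the same API, presumably via the ISM endpoint (_plugins/_ism/policies/<id>), before applying the policy to existing indices.
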
TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-02 00:55:42.505553 | orchestrator | Tuesday 02 September 2025 00:52:58 +0000 (0:00:02.842) 0:00:12.687 ***** 2025-09-02 00:55:42.505565 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:42.505578 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:42.505591 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:42.505604 | orchestrator | 2025-09-02 00:55:42.505617 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-02 00:55:42.505630 | orchestrator | Tuesday 02 September 2025 00:53:01 +0000 (0:00:03.435) 0:00:16.123 ***** 2025-09-02 00:55:42.505642 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:42.505655 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:42.505672 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:42.505685 | orchestrator | 2025-09-02 00:55:42.505698 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-02 00:55:42.505710 | orchestrator | Tuesday 02 September 2025 00:53:03 +0000 (0:00:02.207) 0:00:18.330 ***** 2025-09-02 00:55:42.505723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-02 00:55:42.505750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-02 00:55:42.505764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-02 00:55:42.505777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-02 00:55:42.505796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-02 00:55:42.505823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-02 00:55:42.505837 | orchestrator | 2025-09-02 
00:55:42.505849 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-02 00:55:42.505862 | orchestrator | Tuesday 02 September 2025 00:53:06 +0000 (0:00:02.435) 0:00:20.765 ***** 2025-09-02 00:55:42.505874 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:42.505887 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:55:42.505899 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:55:42.505911 | orchestrator | 2025-09-02 00:55:42.505924 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-02 00:55:42.505937 | orchestrator | Tuesday 02 September 2025 00:53:06 +0000 (0:00:00.368) 0:00:21.133 ***** 2025-09-02 00:55:42.505950 | orchestrator | 2025-09-02 00:55:42.505961 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-02 00:55:42.505972 | orchestrator | Tuesday 02 September 2025 00:53:06 +0000 (0:00:00.065) 0:00:21.199 ***** 2025-09-02 00:55:42.505983 | orchestrator | 2025-09-02 00:55:42.505994 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-02 00:55:42.506005 | orchestrator | Tuesday 02 September 2025 00:53:06 +0000 (0:00:00.074) 0:00:21.273 ***** 2025-09-02 00:55:42.506063 | orchestrator | 2025-09-02 00:55:42.506075 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-02 00:55:42.506086 | orchestrator | Tuesday 02 September 2025 00:53:06 +0000 (0:00:00.069) 0:00:21.343 ***** 2025-09-02 00:55:42.506097 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:42.506108 | orchestrator | 2025-09-02 00:55:42.506119 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-02 00:55:42.506130 | orchestrator | Tuesday 02 September 2025 00:53:06 +0000 (0:00:00.248) 0:00:21.591 ***** 2025-09-02 00:55:42.506141 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:55:42.506152 | orchestrator | 2025-09-02 00:55:42.506163 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-02 00:55:42.506174 | orchestrator | Tuesday 02 September 2025 00:53:07 +0000 (0:00:00.744) 0:00:22.336 ***** 2025-09-02 00:55:42.506185 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:42.506196 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:42.506207 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:42.506217 | orchestrator | 2025-09-02 00:55:42.506228 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-02 00:55:42.506239 | orchestrator | Tuesday 02 September 2025 00:54:07 +0000 (0:00:59.765) 0:01:22.101 ***** 2025-09-02 00:55:42.506250 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:42.506261 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:55:42.506278 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:55:42.506289 | orchestrator | 2025-09-02 00:55:42.506300 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-02 00:55:42.506311 | orchestrator | Tuesday 02 September 2025 00:55:30 +0000 (0:01:23.309) 0:02:45.411 ***** 2025-09-02 00:55:42.506322 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:55:42.506333 | orchestrator | 2025-09-02 00:55:42.506344 | orchestrator | TASK [opensearch : 
Wait for OpenSearch to become ready] ************************ 2025-09-02 00:55:42.506354 | orchestrator | Tuesday 02 September 2025 00:55:31 +0000 (0:00:00.527) 0:02:45.939 ***** 2025-09-02 00:55:42.506365 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:42.506376 | orchestrator | 2025-09-02 00:55:42.506387 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-09-02 00:55:42.506398 | orchestrator | Tuesday 02 September 2025 00:55:34 +0000 (0:00:02.987) 0:02:48.927 ***** 2025-09-02 00:55:42.506413 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:55:42.506425 | orchestrator | 2025-09-02 00:55:42.506435 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-09-02 00:55:42.506446 | orchestrator | Tuesday 02 September 2025 00:55:36 +0000 (0:00:02.277) 0:02:51.204 ***** 2025-09-02 00:55:42.506487 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:42.506499 | orchestrator | 2025-09-02 00:55:42.506509 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-09-02 00:55:42.506520 | orchestrator | Tuesday 02 September 2025 00:55:39 +0000 (0:00:02.723) 0:02:53.927 ***** 2025-09-02 00:55:42.506531 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:55:42.506542 | orchestrator | 2025-09-02 00:55:42.506552 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:55:42.506564 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-02 00:55:42.506576 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-02 00:55:42.506587 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-02 00:55:42.506598 | orchestrator | 2025-09-02 00:55:42.506609 | orchestrator | 2025-09-02 00:55:42.506620 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:55:42.506637 | orchestrator | Tuesday 02 September 2025 00:55:41 +0000 (0:00:02.421) 0:02:56.349 ***** 2025-09-02 00:55:42.506648 | orchestrator | =============================================================================== 2025-09-02 00:55:42.506659 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 83.31s 2025-09-02 00:55:42.506670 | orchestrator | opensearch : Restart opensearch container ------------------------------ 59.77s 2025-09-02 00:55:42.506681 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.44s 2025-09-02 00:55:42.506692 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.99s 2025-09-02 00:55:42.506702 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.86s 2025-09-02 00:55:42.506713 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.84s 2025-09-02 00:55:42.506724 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.72s 2025-09-02 00:55:42.506735 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.44s 2025-09-02 00:55:42.506746 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.42s 2025-09-02 00:55:42.506757 | orchestrator | opensearch : Check if a log retention policy exists 
--------------------- 2.28s 2025-09-02 00:55:42.506767 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.21s 2025-09-02 00:55:42.506778 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.71s 2025-09-02 00:55:42.506800 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.37s 2025-09-02 00:55:42.506812 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.02s 2025-09-02 00:55:42.506822 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.74s 2025-09-02 00:55:42.506833 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.74s 2025-09-02 00:55:42.506844 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2025-09-02 00:55:42.506855 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2025-09-02 00:55:42.506866 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2025-09-02 00:55:42.506877 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2025-09-02 00:55:42.506888 | orchestrator | 2025-09-02 00:55:42 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:55:42.506899 | orchestrator | 2025-09-02 00:55:42 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:55:45.555301 | orchestrator | 2025-09-02 00:55:45 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:55:45.556875 | orchestrator | 2025-09-02 00:55:45 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:55:45.556903 | orchestrator | 2025-09-02 00:55:45 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:55:48.597662 | orchestrator | 2025-09-02 00:55:48 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:55:48.598491 | orchestrator | 2025-09-02 00:55:48 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:55:48.598527 | orchestrator | 2025-09-02 00:55:48 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:55:51.651200 | orchestrator | 2025-09-02 00:55:51 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:55:51.653569 | orchestrator | 2025-09-02 00:55:51 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:55:51.653610 | orchestrator | 2025-09-02 00:55:51 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:55:54.707190 | orchestrator | 2025-09-02 00:55:54 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:55:54.708641 | orchestrator | 2025-09-02 00:55:54 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:55:54.709067 | orchestrator | 2025-09-02 00:55:54 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:55:57.755916 | orchestrator | 2025-09-02 00:55:57 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:55:57.758288 | orchestrator | 2025-09-02 00:55:57 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:55:57.758666 | orchestrator | 2025-09-02 00:55:57 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:00.805658 | orchestrator | 2025-09-02 00:56:00 | INFO  | Task 
9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:00.805996 | orchestrator | 2025-09-02 00:56:00 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:56:00.806064 | orchestrator | 2025-09-02 00:56:00 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:03.865134 | orchestrator | 2025-09-02 00:56:03 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:03.865296 | orchestrator | 2025-09-02 00:56:03 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:56:03.865326 | orchestrator | 2025-09-02 00:56:03 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:06.912047 | orchestrator | 2025-09-02 00:56:06 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:06.912893 | orchestrator | 2025-09-02 00:56:06 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state STARTED 2025-09-02 00:56:06.912922 | orchestrator | 2025-09-02 00:56:06 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:09.975282 | orchestrator | 2025-09-02 00:56:09 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:09.976869 | orchestrator | 2025-09-02 00:56:09 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:56:09.979010 | orchestrator | 2025-09-02 00:56:09 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:56:09.983331 | orchestrator | 2025-09-02 00:56:09 | INFO  | Task 0f706232-6eea-4c32-b16d-d9cb87e0eea6 is in state SUCCESS 2025-09-02 00:56:09.986051 | orchestrator | 2025-09-02 00:56:09.986082 | orchestrator | 2025-09-02 00:56:09.986095 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-09-02 00:56:09.986107 | orchestrator | 2025-09-02 00:56:09.986119 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-02 00:56:09.986132 | orchestrator | Tuesday 02 September 2025 00:52:45 +0000 (0:00:00.114) 0:00:00.114 ***** 2025-09-02 00:56:09.986144 | orchestrator | ok: [localhost] => { 2025-09-02 00:56:09.986157 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-09-02 00:56:09.986169 | orchestrator | } 2025-09-02 00:56:09.986182 | orchestrator | 2025-09-02 00:56:09.986194 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-09-02 00:56:09.986206 | orchestrator | Tuesday 02 September 2025 00:52:45 +0000 (0:00:00.058) 0:00:00.172 ***** 2025-09-02 00:56:09.986218 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-09-02 00:56:09.986230 | orchestrator | ...ignoring 2025-09-02 00:56:09.986556 | orchestrator | 2025-09-02 00:56:09.986568 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-09-02 00:56:09.986580 | orchestrator | Tuesday 02 September 2025 00:52:48 +0000 (0:00:02.883) 0:00:03.055 ***** 2025-09-02 00:56:09.986591 | orchestrator | skipping: [localhost] 2025-09-02 00:56:09.986602 | orchestrator | 2025-09-02 00:56:09.986613 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-09-02 00:56:09.986624 | orchestrator | Tuesday 02 September 2025 00:52:48 +0000 (0:00:00.064) 0:00:03.120 ***** 2025-09-02 00:56:09.986635 | orchestrator | ok: [localhost] 2025-09-02 00:56:09.986646 | orchestrator | 2025-09-02 00:56:09.986658 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 00:56:09.986669 | orchestrator | 2025-09-02 00:56:09.986680 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 00:56:09.986691 | orchestrator | Tuesday 02 September 2025 00:52:48 +0000 (0:00:00.157) 0:00:03.277 ***** 2025-09-02 00:56:09.986702 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:56:09.986713 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:56:09.986724 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:56:09.986736 | orchestrator | 2025-09-02 00:56:09.986747 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 00:56:09.986758 | orchestrator | Tuesday 02 September 2025 00:52:49 +0000 (0:00:00.372) 0:00:03.650 ***** 2025-09-02 00:56:09.986789 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-02 00:56:09.986801 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-02 00:56:09.986812 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-02 00:56:09.986823 | orchestrator | 2025-09-02 00:56:09.986834 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-02 00:56:09.986868 | orchestrator | 2025-09-02 00:56:09.986891 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-02 00:56:09.986903 | orchestrator | Tuesday 02 September 2025 00:52:49 +0000 (0:00:00.614) 0:00:04.264 ***** 2025-09-02 00:56:09.986913 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-02 00:56:09.986924 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-02 00:56:09.986935 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-02 00:56:09.986946 | orchestrator | 2025-09-02 00:56:09.986957 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-02 00:56:09.986967 | orchestrator | Tuesday 02 September 2025 00:52:50 +0000 (0:00:00.485) 0:00:04.750 ***** 2025-09-02 00:56:09.986978 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:56:09.986990 | orchestrator | 2025-09-02 00:56:09.987001 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-02 00:56:09.987011 | orchestrator | Tuesday 02 September 2025 00:52:50 +0000 (0:00:00.662) 0:00:05.412 ***** 2025-09-02 
00:56:09.987040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-02 00:56:09.987062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-02 00:56:09.987085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-02 00:56:09.987101 | orchestrator | 2025-09-02 00:56:09.987123 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-02 00:56:09.987136 | orchestrator | Tuesday 02 September 2025 00:52:53 +0000 (0:00:03.078) 0:00:08.490 ***** 2025-09-02 00:56:09.987149 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:56:09.987162 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:56:09.987175 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:56:09.987188 | orchestrator | 2025-09-02 00:56:09.987202 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-02 00:56:09.987215 | orchestrator | Tuesday 02 September 2025 00:52:54 +0000 (0:00:00.859) 0:00:09.350 ***** 2025-09-02 00:56:09.987228 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:56:09.987240 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:56:09.987253 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:56:09.987266 | orchestrator | 2025-09-02 00:56:09.987279 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-02 00:56:09.987292 | orchestrator | Tuesday 02 September 2025 00:52:56 +0000 (0:00:01.631) 0:00:10.983 ***** 2025-09-02 00:56:09.987312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-02 00:56:09.987342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-02 00:56:09.987364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-02 00:56:09.987384 | orchestrator | 2025-09-02 00:56:09.987396 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-02 00:56:09.987410 | orchestrator | Tuesday 02 September 2025 00:53:01 +0000 (0:00:04.902) 0:00:15.885 ***** 2025-09-02 00:56:09.987423 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:56:09.987436 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:56:09.987448 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:56:09.987459 | orchestrator | 2025-09-02 00:56:09.987470 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-02 00:56:09.987502 | orchestrator | Tuesday 02 September 2025 00:53:02 +0000 (0:00:01.338) 0:00:17.223 ***** 2025-09-02 00:56:09.987513 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:56:09.987524 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:56:09.987534 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:56:09.987545 | orchestrator | 2025-09-02 00:56:09.987556 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-02 00:56:09.987567 | orchestrator | Tuesday 02 September 2025 00:53:07 +0000 (0:00:05.082) 0:00:22.306 ***** 2025-09-02 00:56:09.987578 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:56:09.987589 | orchestrator | 2025-09-02 00:56:09.987600 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-02 00:56:09.987611 | orchestrator | Tuesday 02 September 2025 00:53:08 +0000 (0:00:00.650) 0:00:22.956 ***** 2025-09-02 00:56:09.987631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-02 00:56:09.987650 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:56:09.987668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-02 00:56:09.987680 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:56:09.987699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-02 00:56:09.987717 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:56:09.987728 | orchestrator | 2025-09-02 00:56:09.987739 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-02 00:56:09.987750 | orchestrator | Tuesday 02 September 2025 00:53:12 +0000 (0:00:04.094) 0:00:27.050 ***** 2025-09-02 00:56:09.987766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-02 00:56:09.987778 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:56:09.987795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-02 00:56:09.987813 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:56:09.987829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-02 00:56:09.987841 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:56:09.987853 | orchestrator | 2025-09-02 00:56:09.987863 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-02 00:56:09.987874 | orchestrator | Tuesday 02 September 2025 00:53:15 +0000 (0:00:03.421) 0:00:30.472 ***** 2025-09-02 00:56:09.987886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-02 00:56:09.987910 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:56:09.987930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-02 00:56:09.987942 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:56:09.987958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-02 00:56:09.987971 | 
orchestrator | skipping: [testbed-node-2] 2025-09-02 00:56:09.987982 | orchestrator | 2025-09-02 00:56:09.987992 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-02 00:56:09.988012 | orchestrator | Tuesday 02 September 2025 00:53:19 +0000 (0:00:03.315) 0:00:33.787 ***** 2025-09-02 00:56:09.988032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-02 00:56:09.988045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-02 00:56:09.988094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-02 00:56:09.988115 | orchestrator | 2025-09-02 00:56:09.988126 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-09-02 00:56:09.988137 | orchestrator | Tuesday 02 September 2025 00:53:22 +0000 (0:00:03.686) 0:00:37.474 ***** 2025-09-02 00:56:09.988148 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:56:09.988159 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:56:09.988170 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:56:09.988181 | orchestrator | 2025-09-02 00:56:09.988192 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-02 00:56:09.988203 | orchestrator | Tuesday 02 September 2025 00:53:23 +0000 (0:00:00.885) 0:00:38.359 ***** 2025-09-02 00:56:09.988215 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:56:09.988226 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:56:09.988237 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:56:09.988247 | orchestrator | 2025-09-02 00:56:09.988259 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-09-02 00:56:09.988270 | orchestrator | Tuesday 02 September 2025 00:53:24 +0000 
(0:00:00.758) 0:00:39.118 ***** 2025-09-02 00:56:09.988281 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:56:09.988292 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:56:09.988303 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:56:09.988314 | orchestrator | 2025-09-02 00:56:09.988325 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-02 00:56:09.988340 | orchestrator | Tuesday 02 September 2025 00:53:24 +0000 (0:00:00.396) 0:00:39.514 ***** 2025-09-02 00:56:09.988352 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-02 00:56:09.988363 | orchestrator | ...ignoring 2025-09-02 00:56:09.988375 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-02 00:56:09.988386 | orchestrator | ...ignoring 2025-09-02 00:56:09.988397 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-02 00:56:09.988408 | orchestrator | ...ignoring 2025-09-02 00:56:09.988419 | orchestrator | 2025-09-02 00:56:09.988430 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-02 00:56:09.988441 | orchestrator | Tuesday 02 September 2025 00:53:35 +0000 (0:00:10.931) 0:00:50.445 ***** 2025-09-02 00:56:09.988452 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:56:09.988469 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:56:09.988496 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:56:09.988507 | orchestrator | 2025-09-02 00:56:09.988518 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-02 00:56:09.988530 | orchestrator | Tuesday 02 September 2025 00:53:36 +0000 (0:00:00.405) 0:00:50.851 ***** 2025-09-02 00:56:09.988541 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:56:09.988552 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:56:09.988563 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:56:09.988574 | orchestrator | 2025-09-02 00:56:09.988585 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-02 00:56:09.988596 | orchestrator | Tuesday 02 September 2025 00:53:36 +0000 (0:00:00.655) 0:00:51.506 ***** 2025-09-02 00:56:09.988607 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:56:09.988618 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:56:09.988628 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:56:09.988639 | orchestrator | 2025-09-02 00:56:09.988650 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-02 00:56:09.988661 | orchestrator | Tuesday 02 September 2025 00:53:37 +0000 (0:00:00.485) 0:00:51.992 ***** 2025-09-02 00:56:09.988672 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:56:09.988683 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:56:09.988694 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:56:09.988705 | orchestrator | 2025-09-02 00:56:09.988716 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-02 00:56:09.988727 | orchestrator | Tuesday 02 September 2025 00:53:37 +0000 (0:00:00.416) 0:00:52.409 ***** 2025-09-02 00:56:09.988738 | 
orchestrator | ok: [testbed-node-0] 2025-09-02 00:56:09.988748 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:56:09.988759 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:56:09.988770 | orchestrator | 2025-09-02 00:56:09.988781 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-02 00:56:09.988792 | orchestrator | Tuesday 02 September 2025 00:53:38 +0000 (0:00:00.431) 0:00:52.841 ***** 2025-09-02 00:56:09.988809 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:56:09.988820 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:56:09.988831 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:56:09.988842 | orchestrator | 2025-09-02 00:56:09.988853 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-02 00:56:09.988864 | orchestrator | Tuesday 02 September 2025 00:53:39 +0000 (0:00:00.866) 0:00:53.708 ***** 2025-09-02 00:56:09.988875 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:56:09.988886 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:56:09.988897 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-02 00:56:09.988908 | orchestrator | 2025-09-02 00:56:09.988919 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-02 00:56:09.988930 | orchestrator | Tuesday 02 September 2025 00:53:39 +0000 (0:00:00.397) 0:00:54.106 ***** 2025-09-02 00:56:09.988941 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:56:09.988952 | orchestrator | 2025-09-02 00:56:09.988962 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-02 00:56:09.988973 | orchestrator | Tuesday 02 September 2025 00:53:49 +0000 (0:00:10.063) 0:01:04.169 ***** 2025-09-02 00:56:09.988984 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:56:09.988995 | orchestrator | 2025-09-02 00:56:09.989006 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-02 00:56:09.989017 | orchestrator | Tuesday 02 September 2025 00:53:49 +0000 (0:00:00.124) 0:01:04.294 ***** 2025-09-02 00:56:09.989028 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:56:09.989039 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:56:09.989050 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:56:09.989060 | orchestrator | 2025-09-02 00:56:09.989071 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-02 00:56:09.989088 | orchestrator | Tuesday 02 September 2025 00:53:50 +0000 (0:00:01.052) 0:01:05.346 ***** 2025-09-02 00:56:09.989099 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:56:09.989110 | orchestrator | 2025-09-02 00:56:09.989121 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-02 00:56:09.989132 | orchestrator | Tuesday 02 September 2025 00:53:58 +0000 (0:00:07.816) 0:01:13.163 ***** 2025-09-02 00:56:09.989143 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:56:09.989154 | orchestrator | 2025-09-02 00:56:09.989165 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-09-02 00:56:09.989176 | orchestrator | Tuesday 02 September 2025 00:54:00 +0000 (0:00:01.555) 0:01:14.719 ***** 2025-09-02 00:56:09.989187 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:56:09.989198 | orchestrator | 2025-09-02 
00:56:09.989209 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-02 00:56:09.989220 | orchestrator | Tuesday 02 September 2025 00:54:02 +0000 (0:00:02.640) 0:01:17.360 ***** 2025-09-02 00:56:09.989231 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:56:09.989242 | orchestrator | 2025-09-02 00:56:09.989257 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-02 00:56:09.989269 | orchestrator | Tuesday 02 September 2025 00:54:02 +0000 (0:00:00.119) 0:01:17.480 ***** 2025-09-02 00:56:09.989279 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:56:09.989290 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:56:09.989301 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:56:09.989312 | orchestrator | 2025-09-02 00:56:09.989323 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-02 00:56:09.989334 | orchestrator | Tuesday 02 September 2025 00:54:03 +0000 (0:00:00.318) 0:01:17.798 ***** 2025-09-02 00:56:09.989345 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:56:09.989356 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-02 00:56:09.989367 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:56:09.989377 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:56:09.989388 | orchestrator | 2025-09-02 00:56:09.989399 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-02 00:56:09.989410 | orchestrator | skipping: no hosts matched 2025-09-02 00:56:09.989421 | orchestrator | 2025-09-02 00:56:09.989432 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-02 00:56:09.989443 | orchestrator | 2025-09-02 00:56:09.989454 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-02 00:56:09.989465 | orchestrator | Tuesday 02 September 2025 00:54:03 +0000 (0:00:00.591) 0:01:18.389 ***** 2025-09-02 00:56:09.989492 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:56:09.989503 | orchestrator | 2025-09-02 00:56:09.989514 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-02 00:56:09.989525 | orchestrator | Tuesday 02 September 2025 00:54:30 +0000 (0:00:26.576) 0:01:44.966 ***** 2025-09-02 00:56:09.989536 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:56:09.989547 | orchestrator | 2025-09-02 00:56:09.989558 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-02 00:56:09.989569 | orchestrator | Tuesday 02 September 2025 00:54:46 +0000 (0:00:16.600) 0:02:01.567 ***** 2025-09-02 00:56:09.989580 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:56:09.989591 | orchestrator | 2025-09-02 00:56:09.989601 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-02 00:56:09.989612 | orchestrator | 2025-09-02 00:56:09.989623 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-02 00:56:09.989634 | orchestrator | Tuesday 02 September 2025 00:54:49 +0000 (0:00:02.396) 0:02:03.963 ***** 2025-09-02 00:56:09.989645 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:56:09.989655 | orchestrator | 2025-09-02 00:56:09.989666 | orchestrator | TASK [mariadb : Wait for MariaDB service port 
liveness] ************************ 2025-09-02 00:56:09.989677 | orchestrator | Tuesday 02 September 2025 00:55:09 +0000 (0:00:20.071) 0:02:24.034 ***** 2025-09-02 00:56:09.989694 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:56:09.989705 | orchestrator | 2025-09-02 00:56:09.989716 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-02 00:56:09.989727 | orchestrator | Tuesday 02 September 2025 00:55:31 +0000 (0:00:21.596) 0:02:45.631 ***** 2025-09-02 00:56:09.989738 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:56:09.989749 | orchestrator | 2025-09-02 00:56:09.989759 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-02 00:56:09.989770 | orchestrator | 2025-09-02 00:56:09.989787 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-02 00:56:09.989798 | orchestrator | Tuesday 02 September 2025 00:55:33 +0000 (0:00:02.635) 0:02:48.266 ***** 2025-09-02 00:56:09.989809 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:56:09.989820 | orchestrator | 2025-09-02 00:56:09.989831 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-02 00:56:09.989842 | orchestrator | Tuesday 02 September 2025 00:55:50 +0000 (0:00:17.159) 0:03:05.425 ***** 2025-09-02 00:56:09.989853 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:56:09.989864 | orchestrator | 2025-09-02 00:56:09.989875 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-02 00:56:09.989885 | orchestrator | Tuesday 02 September 2025 00:55:51 +0000 (0:00:00.566) 0:03:05.992 ***** 2025-09-02 00:56:09.989896 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:56:09.989907 | orchestrator | 2025-09-02 00:56:09.989918 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-02 00:56:09.989929 | orchestrator | 2025-09-02 00:56:09.989939 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-02 00:56:09.989950 | orchestrator | Tuesday 02 September 2025 00:55:54 +0000 (0:00:02.724) 0:03:08.716 ***** 2025-09-02 00:56:09.989961 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:56:09.989972 | orchestrator | 2025-09-02 00:56:09.989983 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-02 00:56:09.989994 | orchestrator | Tuesday 02 September 2025 00:55:54 +0000 (0:00:00.538) 0:03:09.255 ***** 2025-09-02 00:56:09.990004 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:56:09.990066 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:56:09.990082 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:56:09.990093 | orchestrator | 2025-09-02 00:56:09.990104 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-02 00:56:09.990115 | orchestrator | Tuesday 02 September 2025 00:55:56 +0000 (0:00:02.214) 0:03:11.469 ***** 2025-09-02 00:56:09.990127 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:56:09.990138 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:56:09.990149 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:56:09.990159 | orchestrator | 2025-09-02 00:56:09.990170 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-02 00:56:09.990181 
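Each restart above is followed by the same two gates, port liveness and "Wait for MariaDB service to sync WSREP", so a node only counts as back in the cluster once Galera reports it as synced. A hedged sketch of such a sync check, assuming the usual wsrep_local_state_comment status variable; credentials and host are placeholders, not values from this deployment:

import time
import pymysql  # third-party MySQL/MariaDB client

def wait_wsrep_synced(host: str, user: str, password: str, timeout: float = 120.0) -> bool:
    """Poll until wsrep_local_state_comment reports 'Synced' or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            conn = pymysql.connect(host=host, user=user, password=password)
            with conn.cursor() as cur:
                cur.execute("SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'")
                row = cur.fetchone()  # e.g. ('wsrep_local_state_comment', 'Synced')
            conn.close()
            if row and row[1] == "Synced":
                return True
        except pymysql.MySQLError:
            pass  # the node may still be starting up
        time.sleep(2)
    return False
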
| orchestrator | Tuesday 02 September 2025 00:55:59 +0000 (0:00:02.270) 0:03:13.740 ***** 2025-09-02 00:56:09.990192 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:56:09.990203 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:56:09.990214 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:56:09.990225 | orchestrator | 2025-09-02 00:56:09.990236 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-02 00:56:09.990247 | orchestrator | Tuesday 02 September 2025 00:56:01 +0000 (0:00:02.244) 0:03:15.984 ***** 2025-09-02 00:56:09.990258 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:56:09.990274 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:56:09.990285 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:56:09.990296 | orchestrator | 2025-09-02 00:56:09.990307 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-02 00:56:09.990318 | orchestrator | Tuesday 02 September 2025 00:56:03 +0000 (0:00:02.155) 0:03:18.140 ***** 2025-09-02 00:56:09.990329 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:56:09.990347 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:56:09.990358 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:56:09.990369 | orchestrator | 2025-09-02 00:56:09.990380 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-02 00:56:09.990391 | orchestrator | Tuesday 02 September 2025 00:56:06 +0000 (0:00:03.114) 0:03:21.254 ***** 2025-09-02 00:56:09.990402 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:56:09.990412 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:56:09.990423 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:56:09.990434 | orchestrator | 2025-09-02 00:56:09.990445 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:56:09.990457 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-02 00:56:09.990468 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-02 00:56:09.990497 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-02 00:56:09.990509 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-02 00:56:09.990520 | orchestrator | 2025-09-02 00:56:09.990531 | orchestrator | 2025-09-02 00:56:09.990542 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:56:09.990553 | orchestrator | Tuesday 02 September 2025 00:56:07 +0000 (0:00:00.452) 0:03:21.706 ***** 2025-09-02 00:56:09.990564 | orchestrator | =============================================================================== 2025-09-02 00:56:09.990575 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 46.65s 2025-09-02 00:56:09.990586 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 38.20s 2025-09-02 00:56:09.990597 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.16s 2025-09-02 00:56:09.990607 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.93s 2025-09-02 00:56:09.990618 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 
10.06s 2025-09-02 00:56:09.990630 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.82s 2025-09-02 00:56:09.990647 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.08s 2025-09-02 00:56:09.990659 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.03s 2025-09-02 00:56:09.990670 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.90s 2025-09-02 00:56:09.990680 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.09s 2025-09-02 00:56:09.990691 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.69s 2025-09-02 00:56:09.990702 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.42s 2025-09-02 00:56:09.990713 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.32s 2025-09-02 00:56:09.990724 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.11s 2025-09-02 00:56:09.990735 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.08s 2025-09-02 00:56:09.990746 | orchestrator | Check MariaDB service --------------------------------------------------- 2.88s 2025-09-02 00:56:09.990756 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.72s 2025-09-02 00:56:09.990767 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.64s 2025-09-02 00:56:09.990778 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.27s 2025-09-02 00:56:09.990789 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.24s 2025-09-02 00:56:09.990806 | orchestrator | 2025-09-02 00:56:09 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:13.034390 | orchestrator | 2025-09-02 00:56:13 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:13.036141 | orchestrator | 2025-09-02 00:56:13 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:56:13.038009 | orchestrator | 2025-09-02 00:56:13 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:56:13.038144 | orchestrator | 2025-09-02 00:56:13 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:16.092228 | orchestrator | 2025-09-02 00:56:16 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:16.094432 | orchestrator | 2025-09-02 00:56:16 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:56:16.095765 | orchestrator | 2025-09-02 00:56:16 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:56:16.096053 | orchestrator | 2025-09-02 00:56:16 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:19.140597 | orchestrator | 2025-09-02 00:56:19 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:19.142826 | orchestrator | 2025-09-02 00:56:19 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:56:19.144968 | orchestrator | 2025-09-02 00:56:19 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:56:19.145081 | orchestrator | 2025-09-02 00:56:19 | INFO  | Wait 1 second(s) until the next check 
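The interleaved INFO lines show the deployment manager polling three asynchronous tasks (identified by their UUIDs) once a second until they leave the STARTED state; the buffered Ansible output above only appears in the log once a task completes. A sketch of that control flow, where get_task_state() is a hypothetical stand-in for whatever result backend the manager actually queries:

import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll each task's state every `interval` seconds until all are finished."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # e.g. 'STARTED', 'SUCCESS', 'FAILURE'
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
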
2025-09-02 00:56:22.186819 | orchestrator | 2025-09-02 00:56:22 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:22.187207 | orchestrator | 2025-09-02 00:56:22 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:56:22.191160 | orchestrator | 2025-09-02 00:56:22 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:56:22.191238 | orchestrator | 2025-09-02 00:56:22 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:25.236155 | orchestrator | 2025-09-02 00:56:25 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:25.237942 | orchestrator | 2025-09-02 00:56:25 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:56:25.240270 | orchestrator | 2025-09-02 00:56:25 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:56:25.240310 | orchestrator | 2025-09-02 00:56:25 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:28.283943 | orchestrator | 2025-09-02 00:56:28 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:28.284059 | orchestrator | 2025-09-02 00:56:28 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:56:28.286161 | orchestrator | 2025-09-02 00:56:28 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:56:28.286201 | orchestrator | 2025-09-02 00:56:28 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:31.323482 | orchestrator | 2025-09-02 00:56:31 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:31.324658 | orchestrator | 2025-09-02 00:56:31 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:56:31.327245 | orchestrator | 2025-09-02 00:56:31 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:56:31.327466 | orchestrator | 2025-09-02 00:56:31 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:34.378545 | orchestrator | 2025-09-02 00:56:34 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:34.379345 | orchestrator | 2025-09-02 00:56:34 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:56:34.381606 | orchestrator | 2025-09-02 00:56:34 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:56:34.382095 | orchestrator | 2025-09-02 00:56:34 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:37.417986 | orchestrator | 2025-09-02 00:56:37 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:37.418235 | orchestrator | 2025-09-02 00:56:37 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:56:37.419157 | orchestrator | 2025-09-02 00:56:37 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:56:37.419182 | orchestrator | 2025-09-02 00:56:37 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:40.475753 | orchestrator | 2025-09-02 00:56:40 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:40.478182 | orchestrator | 2025-09-02 00:56:40 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:56:40.479080 | orchestrator | 2025-09-02 00:56:40 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:56:40.479107 
| orchestrator | 2025-09-02 00:56:40 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:43.516816 | orchestrator | 2025-09-02 00:56:43 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:43.517915 | orchestrator | 2025-09-02 00:56:43 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:56:43.518433 | orchestrator | 2025-09-02 00:56:43 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:56:43.518626 | orchestrator | 2025-09-02 00:56:43 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:46.567803 | orchestrator | 2025-09-02 00:56:46 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:46.568873 | orchestrator | 2025-09-02 00:56:46 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:56:46.571077 | orchestrator | 2025-09-02 00:56:46 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:56:46.571117 | orchestrator | 2025-09-02 00:56:46 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:49.624870 | orchestrator | 2025-09-02 00:56:49 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:49.627751 | orchestrator | 2025-09-02 00:56:49 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:56:49.629772 | orchestrator | 2025-09-02 00:56:49 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:56:49.629879 | orchestrator | 2025-09-02 00:56:49 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:52.669675 | orchestrator | 2025-09-02 00:56:52 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:52.670684 | orchestrator | 2025-09-02 00:56:52 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:56:52.671905 | orchestrator | 2025-09-02 00:56:52 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:56:52.671928 | orchestrator | 2025-09-02 00:56:52 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:55.720248 | orchestrator | 2025-09-02 00:56:55 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:55.722360 | orchestrator | 2025-09-02 00:56:55 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:56:55.724117 | orchestrator | 2025-09-02 00:56:55 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:56:55.724146 | orchestrator | 2025-09-02 00:56:55 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:56:58.772796 | orchestrator | 2025-09-02 00:56:58 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:56:58.774135 | orchestrator | 2025-09-02 00:56:58 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:56:58.775741 | orchestrator | 2025-09-02 00:56:58 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:56:58.775770 | orchestrator | 2025-09-02 00:56:58 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:01.813292 | orchestrator | 2025-09-02 00:57:01 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:57:01.817256 | orchestrator | 2025-09-02 00:57:01 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:01.819327 | orchestrator | 2025-09-02 00:57:01 | INFO  | 
Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:57:01.819351 | orchestrator | 2025-09-02 00:57:01 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:04.863131 | orchestrator | 2025-09-02 00:57:04 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:57:04.865143 | orchestrator | 2025-09-02 00:57:04 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:04.866971 | orchestrator | 2025-09-02 00:57:04 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:57:04.867004 | orchestrator | 2025-09-02 00:57:04 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:07.912458 | orchestrator | 2025-09-02 00:57:07 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:57:07.915131 | orchestrator | 2025-09-02 00:57:07 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:07.918066 | orchestrator | 2025-09-02 00:57:07 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:57:07.918149 | orchestrator | 2025-09-02 00:57:07 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:10.967617 | orchestrator | 2025-09-02 00:57:10 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:57:10.969364 | orchestrator | 2025-09-02 00:57:10 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:10.971996 | orchestrator | 2025-09-02 00:57:10 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:57:10.972029 | orchestrator | 2025-09-02 00:57:10 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:14.018337 | orchestrator | 2025-09-02 00:57:14 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state STARTED 2025-09-02 00:57:14.022945 | orchestrator | 2025-09-02 00:57:14 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:14.025038 | orchestrator | 2025-09-02 00:57:14 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:57:14.025140 | orchestrator | 2025-09-02 00:57:14 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:17.072093 | orchestrator | 2025-09-02 00:57:17 | INFO  | Task 9f910f87-16d1-439c-a7d2-bb773f5b1c9d is in state SUCCESS 2025-09-02 00:57:17.074213 | orchestrator | 2025-09-02 00:57:17.074254 | orchestrator | 2025-09-02 00:57:17.074268 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-02 00:57:17.074281 | orchestrator | 2025-09-02 00:57:17.074293 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-02 00:57:17.074305 | orchestrator | Tuesday 02 September 2025 00:55:09 +0000 (0:00:00.624) 0:00:00.624 ***** 2025-09-02 00:57:17.074317 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:57:17.074329 | orchestrator | 2025-09-02 00:57:17.074341 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-02 00:57:17.074353 | orchestrator | Tuesday 02 September 2025 00:55:10 +0000 (0:00:00.630) 0:00:01.255 ***** 2025-09-02 00:57:17.074365 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:57:17.074377 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:57:17.074389 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:57:17.074401 | 
orchestrator | 2025-09-02 00:57:17.074413 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-02 00:57:17.074424 | orchestrator | Tuesday 02 September 2025 00:55:10 +0000 (0:00:00.638) 0:00:01.893 ***** 2025-09-02 00:57:17.074436 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:57:17.074584 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:57:17.074666 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:57:17.074677 | orchestrator | 2025-09-02 00:57:17.074688 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-02 00:57:17.074781 | orchestrator | Tuesday 02 September 2025 00:55:11 +0000 (0:00:00.298) 0:00:02.192 ***** 2025-09-02 00:57:17.075162 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:57:17.075178 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:57:17.075191 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:57:17.075202 | orchestrator | 2025-09-02 00:57:17.075278 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-02 00:57:17.075292 | orchestrator | Tuesday 02 September 2025 00:55:11 +0000 (0:00:00.807) 0:00:03.000 ***** 2025-09-02 00:57:17.075355 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:57:17.075368 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:57:17.075379 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:57:17.075390 | orchestrator | 2025-09-02 00:57:17.075401 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-02 00:57:17.075412 | orchestrator | Tuesday 02 September 2025 00:55:12 +0000 (0:00:00.331) 0:00:03.332 ***** 2025-09-02 00:57:17.075423 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:57:17.075433 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:57:17.075444 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:57:17.075455 | orchestrator | 2025-09-02 00:57:17.075466 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-02 00:57:17.075477 | orchestrator | Tuesday 02 September 2025 00:55:12 +0000 (0:00:00.316) 0:00:03.649 ***** 2025-09-02 00:57:17.075488 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:57:17.075499 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:57:17.075510 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:57:17.075540 | orchestrator | 2025-09-02 00:57:17.075817 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-02 00:57:17.075835 | orchestrator | Tuesday 02 September 2025 00:55:12 +0000 (0:00:00.335) 0:00:03.984 ***** 2025-09-02 00:57:17.075847 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.075859 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.075870 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.075882 | orchestrator | 2025-09-02 00:57:17.075893 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-02 00:57:17.075903 | orchestrator | Tuesday 02 September 2025 00:55:13 +0000 (0:00:00.496) 0:00:04.480 ***** 2025-09-02 00:57:17.075914 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:57:17.075925 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:57:17.075937 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:57:17.075948 | orchestrator | 2025-09-02 00:57:17.075975 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-02 
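The "Check if podman binary is present" / "Set_fact container_binary" pair above selects the container runtime the remaining ceph-facts tasks shell out to; on this testbed the later commands clearly use docker. The equivalent check is essentially a PATH lookup, sketched here:

import shutil

# Prefer podman when its binary is on PATH, otherwise fall back to docker,
# mirroring the two tasks above (this testbed ends up with docker).
container_binary = "podman" if shutil.which("podman") else "docker"
print(container_binary)
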
00:57:17.075986 | orchestrator | Tuesday 02 September 2025 00:55:13 +0000 (0:00:00.313) 0:00:04.794 ***** 2025-09-02 00:57:17.075997 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-02 00:57:17.076008 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-02 00:57:17.076019 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-02 00:57:17.076030 | orchestrator | 2025-09-02 00:57:17.076041 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-02 00:57:17.076052 | orchestrator | Tuesday 02 September 2025 00:55:14 +0000 (0:00:00.649) 0:00:05.443 ***** 2025-09-02 00:57:17.076063 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:57:17.076074 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:57:17.076103 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:57:17.076115 | orchestrator | 2025-09-02 00:57:17.076126 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-02 00:57:17.076137 | orchestrator | Tuesday 02 September 2025 00:55:14 +0000 (0:00:00.416) 0:00:05.860 ***** 2025-09-02 00:57:17.076147 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-02 00:57:17.076170 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-02 00:57:17.076181 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-02 00:57:17.076192 | orchestrator | 2025-09-02 00:57:17.076203 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-02 00:57:17.076266 | orchestrator | Tuesday 02 September 2025 00:55:16 +0000 (0:00:02.086) 0:00:07.947 ***** 2025-09-02 00:57:17.076279 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-02 00:57:17.076290 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-02 00:57:17.076301 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-02 00:57:17.076312 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.076323 | orchestrator | 2025-09-02 00:57:17.076334 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-02 00:57:17.076382 | orchestrator | Tuesday 02 September 2025 00:55:17 +0000 (0:00:00.407) 0:00:08.354 ***** 2025-09-02 00:57:17.076397 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.076411 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.076422 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.076434 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.076444 | orchestrator | 2025-09-02 00:57:17.076455 | orchestrator | TASK [ceph-facts : 
Set_fact running_mon - non_container] *********************** 2025-09-02 00:57:17.076466 | orchestrator | Tuesday 02 September 2025 00:55:17 +0000 (0:00:00.804) 0:00:09.159 ***** 2025-09-02 00:57:17.076479 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.076492 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.076513 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.076557 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.076569 | orchestrator | 2025-09-02 00:57:17.076580 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-02 00:57:17.076590 | orchestrator | Tuesday 02 September 2025 00:55:18 +0000 (0:00:00.166) 0:00:09.326 ***** 2025-09-02 00:57:17.076679 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e1e2a694bab2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-02 00:55:15.313663', 'end': '2025-09-02 00:55:15.362456', 'delta': '0:00:00.048793', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e1e2a694bab2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-09-02 00:57:17.076702 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '17a4eac8e76d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-02 00:55:16.099956', 'end': '2025-09-02 00:55:16.136566', 'delta': '0:00:00.036610', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['17a4eac8e76d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-02 00:57:17.076749 | orchestrator | ok: [testbed-node-3] => 
(item={'changed': False, 'stdout': '704f336a1a98', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-02 00:55:16.580560', 'end': '2025-09-02 00:55:16.620772', 'delta': '0:00:00.040212', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['704f336a1a98'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-02 00:57:17.076763 | orchestrator | 2025-09-02 00:57:17.076774 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-02 00:57:17.076785 | orchestrator | Tuesday 02 September 2025 00:55:18 +0000 (0:00:00.369) 0:00:09.695 ***** 2025-09-02 00:57:17.076796 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:57:17.076806 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:57:17.076817 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:57:17.076828 | orchestrator | 2025-09-02 00:57:17.076839 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-02 00:57:17.076859 | orchestrator | Tuesday 02 September 2025 00:55:18 +0000 (0:00:00.449) 0:00:10.145 ***** 2025-09-02 00:57:17.076870 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-09-02 00:57:17.076881 | orchestrator | 2025-09-02 00:57:17.076892 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-02 00:57:17.076903 | orchestrator | Tuesday 02 September 2025 00:55:20 +0000 (0:00:01.634) 0:00:11.779 ***** 2025-09-02 00:57:17.076914 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.076925 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.076936 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.076947 | orchestrator | 2025-09-02 00:57:17.076957 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-02 00:57:17.076968 | orchestrator | Tuesday 02 September 2025 00:55:20 +0000 (0:00:00.290) 0:00:12.070 ***** 2025-09-02 00:57:17.076979 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.076990 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.077001 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.077012 | orchestrator | 2025-09-02 00:57:17.077022 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-02 00:57:17.077033 | orchestrator | Tuesday 02 September 2025 00:55:21 +0000 (0:00:00.403) 0:00:12.474 ***** 2025-09-02 00:57:17.077044 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.077055 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.077066 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.077077 | orchestrator | 2025-09-02 00:57:17.077088 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-02 00:57:17.077098 | orchestrator | Tuesday 02 September 2025 00:55:21 +0000 (0:00:00.477) 0:00:12.951 ***** 2025-09-02 00:57:17.077109 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:57:17.077120 | orchestrator | 2025-09-02 00:57:17.077131 | orchestrator | TASK [ceph-facts : Generate 
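The running_mon fact above is taken directly from docker ps -q --filter name=ceph-mon-<host> (the exact command is visible in the item results), and the following tasks read the existing cluster fsid through that monitor container. A sketch of those two steps; the docker ps filter matches the log, while the ceph fsid invocation is an assumption about how the fact is gathered:

import subprocess

def find_mon_container(hostname: str):
    """Return the ID of a running ceph-mon container for this host, if any."""
    out = subprocess.run(
        ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True, text=True, check=False,
    ).stdout.strip()
    return out.splitlines()[0] if out else None

def get_cluster_fsid(container_id: str) -> str:
    """Ask the monitor container for the cluster fsid (assumed command)."""
    return subprocess.run(
        ["docker", "exec", container_id, "ceph", "--cluster", "ceph", "fsid"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

mon = find_mon_container("testbed-node-0")
if mon:
    print(mon, get_cluster_fsid(mon))
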
cluster fsid] ************************************** 2025-09-02 00:57:17.077142 | orchestrator | Tuesday 02 September 2025 00:55:21 +0000 (0:00:00.125) 0:00:13.076 ***** 2025-09-02 00:57:17.077153 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.077164 | orchestrator | 2025-09-02 00:57:17.077175 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-02 00:57:17.077186 | orchestrator | Tuesday 02 September 2025 00:55:22 +0000 (0:00:00.221) 0:00:13.298 ***** 2025-09-02 00:57:17.077196 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.077207 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.077218 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.077229 | orchestrator | 2025-09-02 00:57:17.077240 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-02 00:57:17.077251 | orchestrator | Tuesday 02 September 2025 00:55:22 +0000 (0:00:00.321) 0:00:13.620 ***** 2025-09-02 00:57:17.077262 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.077272 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.077283 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.077294 | orchestrator | 2025-09-02 00:57:17.077305 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-02 00:57:17.077318 | orchestrator | Tuesday 02 September 2025 00:55:22 +0000 (0:00:00.313) 0:00:13.933 ***** 2025-09-02 00:57:17.077332 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.077344 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.077358 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.077370 | orchestrator | 2025-09-02 00:57:17.077383 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-02 00:57:17.077395 | orchestrator | Tuesday 02 September 2025 00:55:23 +0000 (0:00:00.494) 0:00:14.427 ***** 2025-09-02 00:57:17.077408 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.077420 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.077438 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.077451 | orchestrator | 2025-09-02 00:57:17.077464 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-02 00:57:17.077484 | orchestrator | Tuesday 02 September 2025 00:55:23 +0000 (0:00:00.342) 0:00:14.770 ***** 2025-09-02 00:57:17.077497 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.077509 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.077570 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.077584 | orchestrator | 2025-09-02 00:57:17.077597 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-02 00:57:17.077610 | orchestrator | Tuesday 02 September 2025 00:55:23 +0000 (0:00:00.318) 0:00:15.088 ***** 2025-09-02 00:57:17.077623 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.077636 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.077648 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.077662 | orchestrator | 2025-09-02 00:57:17.077673 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-02 00:57:17.077718 | orchestrator | Tuesday 02 September 2025 00:55:24 +0000 (0:00:00.343) 0:00:15.431 ***** 2025-09-02 00:57:17.077731 | 
orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.077742 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.077753 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.077764 | orchestrator | 2025-09-02 00:57:17.077774 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-02 00:57:17.077785 | orchestrator | Tuesday 02 September 2025 00:55:24 +0000 (0:00:00.489) 0:00:15.921 ***** 2025-09-02 00:57:17.077797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--13b5fa21--9dd3--5f23--9982--99f7e2a8b07c-osd--block--13b5fa21--9dd3--5f23--9982--99f7e2a8b07c', 'dm-uuid-LVM-T6Z3P3nBZVBO8YdzD4wDcT6X0PQUZyHMzCQsTBTPAt7wdpbwDpTgKjlwaJnHX89S'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.077808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--688b3bb6--a638--5f84--8470--ce7969c766cd-osd--block--688b3bb6--a638--5f84--8470--ce7969c766cd', 'dm-uuid-LVM-5DFDHHLaMcqlr42LtK9y1ks0goXeiOLsVdQ3XQwJkrJPN1jGtt7yT9M7NmtcNE4W'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.077818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.077829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.077839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.077856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.077871 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.077906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.077918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.077928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.077942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part1', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part14', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part15', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part16', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:57:17.077964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de858a7c--8c7c--5154--a7df--793b28d7d942-osd--block--de858a7c--8c7c--5154--a7df--793b28d7d942', 'dm-uuid-LVM-ma6ZNkFTI2pW677Dtsi99WvqlH4kOSHkgt2lGF6fu40s8l6PK4gUAx6Lp102tY7q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--13b5fa21--9dd3--5f23--9982--99f7e2a8b07c-osd--block--13b5fa21--9dd3--5f23--9982--99f7e2a8b07c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-52ec4o-w5kU-dUIA-7pTt-Ivor-269A-qymOia', 'scsi-0QEMU_QEMU_HARDDISK_73f7aaa7-092a-4c1a-a663-fd98a6f92d43', 'scsi-SQEMU_QEMU_HARDDISK_73f7aaa7-092a-4c1a-a663-fd98a6f92d43'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:57:17.078048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4843a7b7--fb51--5101--86f0--3e9039878e37-osd--block--4843a7b7--fb51--5101--86f0--3e9039878e37', 'dm-uuid-LVM-kwOU19wIHVYOI5Hf2Y4Yz3ryuAguNEcFccQ4JqaRPimMD4XfTDS8Iz6qATnMeiTA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--688b3bb6--a638--5f84--8470--ce7969c766cd-osd--block--688b3bb6--a638--5f84--8470--ce7969c766cd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pHjcBF-aLQ2-arb5-pD4s-mfWi-GfZC-LbvRyv', 'scsi-0QEMU_QEMU_HARDDISK_533befbb-84ad-4d2f-a6fe-9bcc757d70d3', 'scsi-SQEMU_QEMU_HARDDISK_533befbb-84ad-4d2f-a6fe-9bcc757d70d3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:57:17.078072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078083 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_422500f3-63b7-48d3-a02b-7c8a68fd4498', 'scsi-SQEMU_QEMU_HARDDISK_422500f3-63b7-48d3-a02b-7c8a68fd4498'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:57:17.078100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-02-00-01-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:57:17.078161 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078213 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part1', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part14', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part15', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part16', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:57:17.078252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--de858a7c--8c7c--5154--a7df--793b28d7d942-osd--block--de858a7c--8c7c--5154--a7df--793b28d7d942'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ETcDf6-YET1-mUgR-WJcn-lq56-yxxu-9IOrbI', 'scsi-0QEMU_QEMU_HARDDISK_ab73bc68-b021-49f6-bbbb-bb60dd18c0cd', 'scsi-SQEMU_QEMU_HARDDISK_ab73bc68-b021-49f6-bbbb-bb60dd18c0cd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:57:17.078263 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4843a7b7--fb51--5101--86f0--3e9039878e37-osd--block--4843a7b7--fb51--5101--86f0--3e9039878e37'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-f5ScAO-8MlN-r0n9-EgSW-3S8i-n1aV-MdwxRw', 'scsi-0QEMU_QEMU_HARDDISK_0c16e54e-6892-4c41-822b-0a71b602051a', 'scsi-SQEMU_QEMU_HARDDISK_0c16e54e-6892-4c41-822b-0a71b602051a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:57:17.078274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a98751c-9a0d-464e-b805-2bbf5e836a0e', 'scsi-SQEMU_QEMU_HARDDISK_5a98751c-9a0d-464e-b805-2bbf5e836a0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:57:17.078290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-02-00-01-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:57:17.078300 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.078310 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.078323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ad19e49--f824--57b0--a164--7b3912efd317-osd--block--7ad19e49--f824--57b0--a164--7b3912efd317', 'dm-uuid-LVM-bvxsWt8LXX4MIwOUIceR1g502rbBdH0idmo7Hbn6tK08s02n2USNM6FAhFO2GmKO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': ['dm-name-ceph--14a05dcf--7776--5f2b--8543--65494bada47a-osd--block--14a05dcf--7776--5f2b--8543--65494bada47a', 'dm-uuid-LVM-7MaOfZrc4vC7t91s5rBv8cpEUSYWM9fFMtliVA2Gi7uzqZNfPQaDewuDOFuHo2GF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078392 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-02 00:57:17.078453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part1', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part14', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part15', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part16', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:57:17.078464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7ad19e49--f824--57b0--a164--7b3912efd317-osd--block--7ad19e49--f824--57b0--a164--7b3912efd317'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uMgDuu-QQi3-CkEu-JTS0-eViq-CoBT-fXK4Qm', 'scsi-0QEMU_QEMU_HARDDISK_8c8a1bad-d3c8-4ce8-afe3-e6b6dedd17eb', 'scsi-SQEMU_QEMU_HARDDISK_8c8a1bad-d3c8-4ce8-afe3-e6b6dedd17eb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:57:17.078480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--14a05dcf--7776--5f2b--8543--65494bada47a-osd--block--14a05dcf--7776--5f2b--8543--65494bada47a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZPNxph-hsQl-foE8-Dl2I-UKJd-HrnJ-QvnxGG', 'scsi-0QEMU_QEMU_HARDDISK_4851d1ac-b90a-4b34-9adb-d79585c21de6', 'scsi-SQEMU_QEMU_HARDDISK_4851d1ac-b90a-4b34-9adb-d79585c21de6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:57:17.078494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6860efa8-e6c6-43d7-8842-eeafa8a27f70', 'scsi-SQEMU_QEMU_HARDDISK_6860efa8-e6c6-43d7-8842-eeafa8a27f70'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:57:17.078510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-02-00-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-02 00:57:17.078535 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.078546 | orchestrator | 2025-09-02 00:57:17.078556 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-02 00:57:17.078566 | orchestrator | Tuesday 02 September 2025 00:55:25 +0000 (0:00:00.658) 0:00:16.580 ***** 2025-09-02 00:57:17.078577 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--13b5fa21--9dd3--5f23--9982--99f7e2a8b07c-osd--block--13b5fa21--9dd3--5f23--9982--99f7e2a8b07c', 'dm-uuid-LVM-T6Z3P3nBZVBO8YdzD4wDcT6X0PQUZyHMzCQsTBTPAt7wdpbwDpTgKjlwaJnHX89S'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078588 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--688b3bb6--a638--5f84--8470--ce7969c766cd-osd--block--688b3bb6--a638--5f84--8470--ce7969c766cd', 'dm-uuid-LVM-5DFDHHLaMcqlr42LtK9y1ks0goXeiOLsVdQ3XQwJkrJPN1jGtt7yT9M7NmtcNE4W'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078604 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078614 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078629 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078645 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078656 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078666 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078682 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de858a7c--8c7c--5154--a7df--793b28d7d942-osd--block--de858a7c--8c7c--5154--a7df--793b28d7d942', 'dm-uuid-LVM-ma6ZNkFTI2pW677Dtsi99WvqlH4kOSHkgt2lGF6fu40s8l6PK4gUAx6Lp102tY7q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078692 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078706 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4843a7b7--fb51--5101--86f0--3e9039878e37-osd--block--4843a7b7--fb51--5101--86f0--3e9039878e37', 'dm-uuid-LVM-kwOU19wIHVYOI5Hf2Y4Yz3ryuAguNEcFccQ4JqaRPimMD4XfTDS8Iz6qATnMeiTA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078722 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078732 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078744 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part1', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part14', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part15', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part16', 'scsi-SQEMU_QEMU_HARDDISK_ebaf2104-8d32-4707-a68f-9d7668415e6b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078764 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078781 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--13b5fa21--9dd3--5f23--9982--99f7e2a8b07c-osd--block--13b5fa21--9dd3--5f23--9982--99f7e2a8b07c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-52ec4o-w5kU-dUIA-7pTt-Ivor-269A-qymOia', 'scsi-0QEMU_QEMU_HARDDISK_73f7aaa7-092a-4c1a-a663-fd98a6f92d43', 'scsi-SQEMU_QEMU_HARDDISK_73f7aaa7-092a-4c1a-a663-fd98a6f92d43'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078792 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078812 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--688b3bb6--a638--5f84--8470--ce7969c766cd-osd--block--688b3bb6--a638--5f84--8470--ce7969c766cd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pHjcBF-aLQ2-arb5-pD4s-mfWi-GfZC-LbvRyv', 'scsi-0QEMU_QEMU_HARDDISK_533befbb-84ad-4d2f-a6fe-9bcc757d70d3', 'scsi-SQEMU_QEMU_HARDDISK_533befbb-84ad-4d2f-a6fe-9bcc757d70d3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078822 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078836 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_422500f3-63b7-48d3-a02b-7c8a68fd4498', 'scsi-SQEMU_QEMU_HARDDISK_422500f3-63b7-48d3-a02b-7c8a68fd4498'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078851 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078862 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-02-00-01-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078877 | orchestrator | skipping: [testbed-node-4] 
=> (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078888 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078898 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078919 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part1', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part14', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part15', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part16', 'scsi-SQEMU_QEMU_HARDDISK_5cc69dd7-f132-4ade-913f-1aa60f8d1fc7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078937 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.078948 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--de858a7c--8c7c--5154--a7df--793b28d7d942-osd--block--de858a7c--8c7c--5154--a7df--793b28d7d942'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ETcDf6-YET1-mUgR-WJcn-lq56-yxxu-9IOrbI', 'scsi-0QEMU_QEMU_HARDDISK_ab73bc68-b021-49f6-bbbb-bb60dd18c0cd', 'scsi-SQEMU_QEMU_HARDDISK_ab73bc68-b021-49f6-bbbb-bb60dd18c0cd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078959 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--4843a7b7--fb51--5101--86f0--3e9039878e37-osd--block--4843a7b7--fb51--5101--86f0--3e9039878e37'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-f5ScAO-8MlN-r0n9-EgSW-3S8i-n1aV-MdwxRw', 'scsi-0QEMU_QEMU_HARDDISK_0c16e54e-6892-4c41-822b-0a71b602051a', 'scsi-SQEMU_QEMU_HARDDISK_0c16e54e-6892-4c41-822b-0a71b602051a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078973 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ad19e49--f824--57b0--a164--7b3912efd317-osd--block--7ad19e49--f824--57b0--a164--7b3912efd317', 'dm-uuid-LVM-bvxsWt8LXX4MIwOUIceR1g502rbBdH0idmo7Hbn6tK08s02n2USNM6FAhFO2GmKO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.078990 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a98751c-9a0d-464e-b805-2bbf5e836a0e', 'scsi-SQEMU_QEMU_HARDDISK_5a98751c-9a0d-464e-b805-2bbf5e836a0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.079000 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--14a05dcf--7776--5f2b--8543--65494bada47a-osd--block--14a05dcf--7776--5f2b--8543--65494bada47a', 'dm-uuid-LVM-7MaOfZrc4vC7t91s5rBv8cpEUSYWM9fFMtliVA2Gi7uzqZNfPQaDewuDOFuHo2GF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.079016 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': 
['2025-09-02-00-01-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.079027 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.079037 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.079047 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.079062 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.079077 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.079087 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.079102 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.079112 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.079122 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.079144 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part1', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part14', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part15', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part16', 'scsi-SQEMU_QEMU_HARDDISK_72be9134-82c8-4fbd-a40e-19493d1fd0d5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.079161 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7ad19e49--f824--57b0--a164--7b3912efd317-osd--block--7ad19e49--f824--57b0--a164--7b3912efd317'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uMgDuu-QQi3-CkEu-JTS0-eViq-CoBT-fXK4Qm', 'scsi-0QEMU_QEMU_HARDDISK_8c8a1bad-d3c8-4ce8-afe3-e6b6dedd17eb', 'scsi-SQEMU_QEMU_HARDDISK_8c8a1bad-d3c8-4ce8-afe3-e6b6dedd17eb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.079172 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--14a05dcf--7776--5f2b--8543--65494bada47a-osd--block--14a05dcf--7776--5f2b--8543--65494bada47a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZPNxph-hsQl-foE8-Dl2I-UKJd-HrnJ-QvnxGG', 'scsi-0QEMU_QEMU_HARDDISK_4851d1ac-b90a-4b34-9adb-d79585c21de6', 'scsi-SQEMU_QEMU_HARDDISK_4851d1ac-b90a-4b34-9adb-d79585c21de6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.079186 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6860efa8-e6c6-43d7-8842-eeafa8a27f70', 'scsi-SQEMU_QEMU_HARDDISK_6860efa8-e6c6-43d7-8842-eeafa8a27f70'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.079201 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-02-00-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-02 00:57:17.079217 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.079227 | orchestrator | 2025-09-02 00:57:17.079237 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-02 00:57:17.079247 | orchestrator | Tuesday 02 September 2025 00:55:25 +0000 (0:00:00.581) 0:00:17.161 ***** 2025-09-02 00:57:17.079257 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:57:17.079267 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:57:17.079277 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:57:17.079287 | orchestrator | 2025-09-02 00:57:17.079297 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-02 00:57:17.079306 | orchestrator | Tuesday 02 September 2025 00:55:26 +0000 (0:00:00.711) 0:00:17.873 ***** 2025-09-02 00:57:17.079316 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:57:17.079326 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:57:17.079336 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:57:17.079345 | orchestrator | 2025-09-02 00:57:17.079355 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-02 00:57:17.079365 | orchestrator | Tuesday 02 September 2025 00:55:27 +0000 (0:00:00.483) 0:00:18.357 ***** 2025-09-02 00:57:17.079374 | 
orchestrator | ok: [testbed-node-3] 2025-09-02 00:57:17.079384 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:57:17.079394 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:57:17.079404 | orchestrator | 2025-09-02 00:57:17.079413 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-02 00:57:17.079423 | orchestrator | Tuesday 02 September 2025 00:55:27 +0000 (0:00:00.687) 0:00:19.044 ***** 2025-09-02 00:57:17.079433 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.079443 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.079453 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.079462 | orchestrator | 2025-09-02 00:57:17.079472 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-02 00:57:17.079482 | orchestrator | Tuesday 02 September 2025 00:55:28 +0000 (0:00:00.329) 0:00:19.374 ***** 2025-09-02 00:57:17.079492 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.079502 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.079512 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.079562 | orchestrator | 2025-09-02 00:57:17.079573 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-02 00:57:17.079583 | orchestrator | Tuesday 02 September 2025 00:55:28 +0000 (0:00:00.393) 0:00:19.767 ***** 2025-09-02 00:57:17.079593 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.079603 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.079613 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.079623 | orchestrator | 2025-09-02 00:57:17.079632 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-02 00:57:17.079642 | orchestrator | Tuesday 02 September 2025 00:55:29 +0000 (0:00:00.548) 0:00:20.315 ***** 2025-09-02 00:57:17.079652 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-02 00:57:17.079661 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-02 00:57:17.079671 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-02 00:57:17.079681 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-02 00:57:17.079690 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-02 00:57:17.079700 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-02 00:57:17.079710 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-02 00:57:17.079720 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-02 00:57:17.079729 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-02 00:57:17.079739 | orchestrator | 2025-09-02 00:57:17.079749 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-02 00:57:17.079764 | orchestrator | Tuesday 02 September 2025 00:55:29 +0000 (0:00:00.865) 0:00:21.181 ***** 2025-09-02 00:57:17.079774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-02 00:57:17.079784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-02 00:57:17.079793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-02 00:57:17.079803 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.079813 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-02 00:57:17.079823 | orchestrator | 
skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-02 00:57:17.079832 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-02 00:57:17.079842 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.079852 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-02 00:57:17.079861 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-02 00:57:17.079875 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-02 00:57:17.079885 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.079895 | orchestrator | 2025-09-02 00:57:17.079905 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-02 00:57:17.079915 | orchestrator | Tuesday 02 September 2025 00:55:30 +0000 (0:00:00.391) 0:00:21.572 ***** 2025-09-02 00:57:17.079925 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 00:57:17.079934 | orchestrator | 2025-09-02 00:57:17.079945 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-02 00:57:17.079955 | orchestrator | Tuesday 02 September 2025 00:55:31 +0000 (0:00:00.735) 0:00:22.308 ***** 2025-09-02 00:57:17.079965 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.079975 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.079983 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.079991 | orchestrator | 2025-09-02 00:57:17.080003 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-02 00:57:17.080012 | orchestrator | Tuesday 02 September 2025 00:55:31 +0000 (0:00:00.322) 0:00:22.630 ***** 2025-09-02 00:57:17.080020 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.080027 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.080035 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.080043 | orchestrator | 2025-09-02 00:57:17.080051 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-02 00:57:17.080060 | orchestrator | Tuesday 02 September 2025 00:55:31 +0000 (0:00:00.332) 0:00:22.963 ***** 2025-09-02 00:57:17.080068 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.080075 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.080083 | orchestrator | skipping: [testbed-node-5] 2025-09-02 00:57:17.080091 | orchestrator | 2025-09-02 00:57:17.080099 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-02 00:57:17.080107 | orchestrator | Tuesday 02 September 2025 00:55:32 +0000 (0:00:00.376) 0:00:23.339 ***** 2025-09-02 00:57:17.080115 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:57:17.080123 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:57:17.080131 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:57:17.080139 | orchestrator | 2025-09-02 00:57:17.080147 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-02 00:57:17.080155 | orchestrator | Tuesday 02 September 2025 00:55:32 +0000 (0:00:00.674) 0:00:24.014 ***** 2025-09-02 00:57:17.080163 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-02 00:57:17.080170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-02 
00:57:17.080178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-02 00:57:17.080186 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.080194 | orchestrator | 2025-09-02 00:57:17.080202 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-02 00:57:17.080216 | orchestrator | Tuesday 02 September 2025 00:55:33 +0000 (0:00:00.401) 0:00:24.416 ***** 2025-09-02 00:57:17.080225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-02 00:57:17.080232 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-02 00:57:17.080240 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-02 00:57:17.080248 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.080256 | orchestrator | 2025-09-02 00:57:17.080264 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-02 00:57:17.080272 | orchestrator | Tuesday 02 September 2025 00:55:33 +0000 (0:00:00.380) 0:00:24.796 ***** 2025-09-02 00:57:17.080280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-02 00:57:17.080288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-02 00:57:17.080296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-02 00:57:17.080304 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.080312 | orchestrator | 2025-09-02 00:57:17.080320 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-02 00:57:17.080328 | orchestrator | Tuesday 02 September 2025 00:55:34 +0000 (0:00:00.397) 0:00:25.194 ***** 2025-09-02 00:57:17.080337 | orchestrator | ok: [testbed-node-3] 2025-09-02 00:57:17.080345 | orchestrator | ok: [testbed-node-4] 2025-09-02 00:57:17.080353 | orchestrator | ok: [testbed-node-5] 2025-09-02 00:57:17.080361 | orchestrator | 2025-09-02 00:57:17.080369 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-02 00:57:17.080377 | orchestrator | Tuesday 02 September 2025 00:55:34 +0000 (0:00:00.332) 0:00:25.526 ***** 2025-09-02 00:57:17.080385 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-02 00:57:17.080393 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-02 00:57:17.080401 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-02 00:57:17.080409 | orchestrator | 2025-09-02 00:57:17.080417 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-02 00:57:17.080425 | orchestrator | Tuesday 02 September 2025 00:55:34 +0000 (0:00:00.510) 0:00:26.037 ***** 2025-09-02 00:57:17.080433 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-02 00:57:17.080441 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-02 00:57:17.080449 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-02 00:57:17.080457 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-02 00:57:17.080465 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-02 00:57:17.080473 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-02 00:57:17.080481 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => 
(item=testbed-manager) 2025-09-02 00:57:17.080489 | orchestrator | 2025-09-02 00:57:17.080497 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-02 00:57:17.080508 | orchestrator | Tuesday 02 September 2025 00:55:35 +0000 (0:00:01.068) 0:00:27.106 ***** 2025-09-02 00:57:17.080517 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-02 00:57:17.080536 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-02 00:57:17.080544 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-02 00:57:17.080552 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-02 00:57:17.080560 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-02 00:57:17.080568 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-02 00:57:17.080576 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-02 00:57:17.080611 | orchestrator | 2025-09-02 00:57:17.080623 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-09-02 00:57:17.080632 | orchestrator | Tuesday 02 September 2025 00:55:38 +0000 (0:00:02.094) 0:00:29.200 ***** 2025-09-02 00:57:17.080639 | orchestrator | skipping: [testbed-node-3] 2025-09-02 00:57:17.080647 | orchestrator | skipping: [testbed-node-4] 2025-09-02 00:57:17.080655 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-09-02 00:57:17.080663 | orchestrator | 2025-09-02 00:57:17.080671 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-09-02 00:57:17.080679 | orchestrator | Tuesday 02 September 2025 00:55:38 +0000 (0:00:00.382) 0:00:29.583 ***** 2025-09-02 00:57:17.080688 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-02 00:57:17.080697 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-02 00:57:17.080706 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-02 00:57:17.080714 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-02 00:57:17.080722 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 
'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-02 00:57:17.080730 | orchestrator | 2025-09-02 00:57:17.080738 | orchestrator | TASK [generate keys] *********************************************************** 2025-09-02 00:57:17.080747 | orchestrator | Tuesday 02 September 2025 00:56:21 +0000 (0:00:43.508) 0:01:13.092 ***** 2025-09-02 00:57:17.080754 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:57:17.080763 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:57:17.080771 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:57:17.080778 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:57:17.080786 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:57:17.080794 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:57:17.080802 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-09-02 00:57:17.080810 | orchestrator | 2025-09-02 00:57:17.080818 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-09-02 00:57:17.080826 | orchestrator | Tuesday 02 September 2025 00:56:45 +0000 (0:00:23.677) 0:01:36.770 ***** 2025-09-02 00:57:17.080834 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:57:17.080842 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:57:17.080850 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:57:17.080858 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:57:17.080871 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:57:17.080879 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:57:17.080887 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-02 00:57:17.080895 | orchestrator | 2025-09-02 00:57:17.080907 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-02 00:57:17.080915 | orchestrator | Tuesday 02 September 2025 00:56:58 +0000 (0:00:12.454) 0:01:49.224 ***** 2025-09-02 00:57:17.080923 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:57:17.080931 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-02 00:57:17.080939 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-02 00:57:17.080947 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:57:17.080955 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-02 00:57:17.080963 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-02 00:57:17.080974 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:57:17.080983 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-02 00:57:17.080991 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-2(192.168.16.12)] => (item=None) 2025-09-02 00:57:17.080999 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:57:17.081007 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-02 00:57:17.081015 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-02 00:57:17.081023 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:57:17.081031 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-02 00:57:17.081039 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-02 00:57:17.081047 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-02 00:57:17.081055 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-02 00:57:17.081063 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-02 00:57:17.081071 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-02 00:57:17.081079 | orchestrator | 2025-09-02 00:57:17.081087 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:57:17.081095 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-02 00:57:17.081104 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-02 00:57:17.081112 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-02 00:57:17.081120 | orchestrator | 2025-09-02 00:57:17.081128 | orchestrator | 2025-09-02 00:57:17.081136 | orchestrator | 2025-09-02 00:57:17.081144 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:57:17.081152 | orchestrator | Tuesday 02 September 2025 00:57:16 +0000 (0:00:18.489) 0:02:07.714 ***** 2025-09-02 00:57:17.081160 | orchestrator | =============================================================================== 2025-09-02 00:57:17.081168 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.51s 2025-09-02 00:57:17.081176 | orchestrator | generate keys ---------------------------------------------------------- 23.68s 2025-09-02 00:57:17.081184 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.49s 2025-09-02 00:57:17.081197 | orchestrator | get keys from monitors ------------------------------------------------- 12.45s 2025-09-02 00:57:17.081205 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.09s 2025-09-02 00:57:17.081213 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.09s 2025-09-02 00:57:17.081221 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.63s 2025-09-02 00:57:17.081229 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.07s 2025-09-02 00:57:17.081237 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.87s 2025-09-02 00:57:17.081245 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.81s 2025-09-02 00:57:17.081253 | orchestrator | 
ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.80s 2025-09-02 00:57:17.081261 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.74s 2025-09-02 00:57:17.081269 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.71s 2025-09-02 00:57:17.081277 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.69s 2025-09-02 00:57:17.081285 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.67s 2025-09-02 00:57:17.081293 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.66s 2025-09-02 00:57:17.081301 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.65s 2025-09-02 00:57:17.081309 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.64s 2025-09-02 00:57:17.081317 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.63s 2025-09-02 00:57:17.081328 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.58s 2025-09-02 00:57:17.081336 | orchestrator | 2025-09-02 00:57:17 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:17.081344 | orchestrator | 2025-09-02 00:57:17 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:57:17.081352 | orchestrator | 2025-09-02 00:57:17 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:20.131817 | orchestrator | 2025-09-02 00:57:20 | INFO  | Task dcdfe18d-c9a8-4d99-a3d4-27685e7056c8 is in state STARTED 2025-09-02 00:57:20.133815 | orchestrator | 2025-09-02 00:57:20 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:20.135854 | orchestrator | 2025-09-02 00:57:20 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:57:20.136274 | orchestrator | 2025-09-02 00:57:20 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:23.177613 | orchestrator | 2025-09-02 00:57:23 | INFO  | Task dcdfe18d-c9a8-4d99-a3d4-27685e7056c8 is in state STARTED 2025-09-02 00:57:23.178848 | orchestrator | 2025-09-02 00:57:23 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:23.179837 | orchestrator | 2025-09-02 00:57:23 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:57:23.179861 | orchestrator | 2025-09-02 00:57:23 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:26.230098 | orchestrator | 2025-09-02 00:57:26 | INFO  | Task dcdfe18d-c9a8-4d99-a3d4-27685e7056c8 is in state STARTED 2025-09-02 00:57:26.231777 | orchestrator | 2025-09-02 00:57:26 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:26.234509 | orchestrator | 2025-09-02 00:57:26 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:57:26.234807 | orchestrator | 2025-09-02 00:57:26 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:29.279147 | orchestrator | 2025-09-02 00:57:29 | INFO  | Task dcdfe18d-c9a8-4d99-a3d4-27685e7056c8 is in state STARTED 2025-09-02 00:57:29.281713 | orchestrator | 2025-09-02 00:57:29 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:29.282391 | orchestrator | 2025-09-02 00:57:29 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 
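The device-by-device skips at the top of this ceph play are expected: the false_condition recorded there is "osd_auto_discovery | default(False) | bool", so with auto-discovery left at its default the role only acts on an explicitly configured device list. The "create openstack pool(s)" task that follows loops over five pool definitions (backups, volumes, images, metrics, vms) with identical replication and placement-group settings. Reassembled from those loop items, the corresponding ceph-ansible style variable looks roughly like the sketch below; the openstack_pools name and list shape are assumptions based on ceph-ansible conventions, while the per-pool values are copied from the log (empty erasure_profile/expected_num_objects fields omitted).

```yaml
# Sketch only: per-pool values copied from the loop items logged above; the
# openstack_pools variable name/shape is an assumed ceph-ansible convention,
# not taken from this testbed's group_vars.
openstack_pools:
  - { name: backups, application: rbd, pg_num: 32, pgp_num: 32, size: 3, min_size: 0, rule_name: replicated_rule, pg_autoscale_mode: false }
  - { name: volumes, application: rbd, pg_num: 32, pgp_num: 32, size: 3, min_size: 0, rule_name: replicated_rule, pg_autoscale_mode: false }
  - { name: images,  application: rbd, pg_num: 32, pgp_num: 32, size: 3, min_size: 0, rule_name: replicated_rule, pg_autoscale_mode: false }
  - { name: metrics, application: rbd, pg_num: 32, pgp_num: 32, size: 3, min_size: 0, rule_name: replicated_rule, pg_autoscale_mode: false }
  - { name: vms,     application: rbd, pg_num: 32, pgp_num: 32, size: 3, min_size: 0, rule_name: replicated_rule, pg_autoscale_mode: false }
```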
2025-09-02 00:57:29.282418 | orchestrator | 2025-09-02 00:57:29 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:32.332626 | orchestrator | 2025-09-02 00:57:32 | INFO  | Task dcdfe18d-c9a8-4d99-a3d4-27685e7056c8 is in state STARTED 2025-09-02 00:57:32.335011 | orchestrator | 2025-09-02 00:57:32 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:32.336995 | orchestrator | 2025-09-02 00:57:32 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:57:32.337029 | orchestrator | 2025-09-02 00:57:32 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:35.397818 | orchestrator | 2025-09-02 00:57:35 | INFO  | Task dcdfe18d-c9a8-4d99-a3d4-27685e7056c8 is in state STARTED 2025-09-02 00:57:35.400402 | orchestrator | 2025-09-02 00:57:35 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:35.403696 | orchestrator | 2025-09-02 00:57:35 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:57:35.403769 | orchestrator | 2025-09-02 00:57:35 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:38.457812 | orchestrator | 2025-09-02 00:57:38 | INFO  | Task dcdfe18d-c9a8-4d99-a3d4-27685e7056c8 is in state STARTED 2025-09-02 00:57:38.459600 | orchestrator | 2025-09-02 00:57:38 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:38.461499 | orchestrator | 2025-09-02 00:57:38 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:57:38.462071 | orchestrator | 2025-09-02 00:57:38 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:41.513199 | orchestrator | 2025-09-02 00:57:41 | INFO  | Task dcdfe18d-c9a8-4d99-a3d4-27685e7056c8 is in state STARTED 2025-09-02 00:57:41.514701 | orchestrator | 2025-09-02 00:57:41 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:41.517747 | orchestrator | 2025-09-02 00:57:41 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:57:41.517954 | orchestrator | 2025-09-02 00:57:41 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:44.555890 | orchestrator | 2025-09-02 00:57:44 | INFO  | Task dcdfe18d-c9a8-4d99-a3d4-27685e7056c8 is in state STARTED 2025-09-02 00:57:44.556784 | orchestrator | 2025-09-02 00:57:44 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:44.558239 | orchestrator | 2025-09-02 00:57:44 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:57:44.558339 | orchestrator | 2025-09-02 00:57:44 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:47.595094 | orchestrator | 2025-09-02 00:57:47 | INFO  | Task dcdfe18d-c9a8-4d99-a3d4-27685e7056c8 is in state SUCCESS 2025-09-02 00:57:47.597393 | orchestrator | 2025-09-02 00:57:47 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:47.599457 | orchestrator | 2025-09-02 00:57:47 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:57:47.599485 | orchestrator | 2025-09-02 00:57:47 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:50.652344 | orchestrator | 2025-09-02 00:57:50 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:50.655392 | orchestrator | 2025-09-02 00:57:50 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state STARTED 2025-09-02 00:57:50.658504 | orchestrator | 
2025-09-02 00:57:50 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state STARTED 2025-09-02 00:57:50.659057 | orchestrator | 2025-09-02 00:57:50 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:53.702498 | orchestrator | 2025-09-02 00:57:53 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:53.704995 | orchestrator | 2025-09-02 00:57:53 | INFO  | Task 63f03935-20f3-4420-a5d4-07a49a66391d is in state SUCCESS 2025-09-02 00:57:53.707504 | orchestrator | 2025-09-02 00:57:53.707665 | orchestrator | 2025-09-02 00:57:53.707686 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-02 00:57:53.707698 | orchestrator | 2025-09-02 00:57:53.707710 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-02 00:57:53.707721 | orchestrator | Tuesday 02 September 2025 00:57:20 +0000 (0:00:00.158) 0:00:00.159 ***** 2025-09-02 00:57:53.707733 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-02 00:57:53.707745 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-02 00:57:53.707755 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-02 00:57:53.707766 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-02 00:57:53.707776 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-02 00:57:53.707787 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-02 00:57:53.707798 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-02 00:57:53.707808 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-02 00:57:53.707819 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-02 00:57:53.707829 | orchestrator | 2025-09-02 00:57:53.707840 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-02 00:57:53.707851 | orchestrator | Tuesday 02 September 2025 00:57:25 +0000 (0:00:04.437) 0:00:04.596 ***** 2025-09-02 00:57:53.707862 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-02 00:57:53.707873 | orchestrator | 2025-09-02 00:57:53.707884 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-02 00:57:53.707895 | orchestrator | Tuesday 02 September 2025 00:57:26 +0000 (0:00:01.015) 0:00:05.612 ***** 2025-09-02 00:57:53.707906 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-02 00:57:53.707917 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-02 00:57:53.707927 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-02 00:57:53.707938 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-02 00:57:53.707949 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-02 00:57:53.707959 | orchestrator | changed: [testbed-manager -> localhost] => 
(item=ceph.client.nova.keyring) 2025-09-02 00:57:53.707970 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-02 00:57:53.707981 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-09-02 00:57:53.707991 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-02 00:57:53.708002 | orchestrator | 2025-09-02 00:57:53.708012 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-09-02 00:57:53.708047 | orchestrator | Tuesday 02 September 2025 00:57:39 +0000 (0:00:13.228) 0:00:18.841 ***** 2025-09-02 00:57:53.708073 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-09-02 00:57:53.708085 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-02 00:57:53.708096 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-02 00:57:53.708106 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-02 00:57:53.708117 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-02 00:57:53.708127 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-09-02 00:57:53.708138 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-02 00:57:53.708149 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-09-02 00:57:53.708159 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-02 00:57:53.708170 | orchestrator | 2025-09-02 00:57:53.708181 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:57:53.708192 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 00:57:53.708204 | orchestrator | 2025-09-02 00:57:53.708215 | orchestrator | 2025-09-02 00:57:53.708226 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:57:53.708236 | orchestrator | Tuesday 02 September 2025 00:57:46 +0000 (0:00:06.660) 0:00:25.501 ***** 2025-09-02 00:57:53.708247 | orchestrator | =============================================================================== 2025-09-02 00:57:53.708258 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.23s 2025-09-02 00:57:53.708269 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.66s 2025-09-02 00:57:53.708279 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.44s 2025-09-02 00:57:53.708290 | orchestrator | Create share directory -------------------------------------------------- 1.02s 2025-09-02 00:57:53.708301 | orchestrator | 2025-09-02 00:57:53.708312 | orchestrator | 2025-09-02 00:57:53.708323 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 00:57:53.708334 | orchestrator | 2025-09-02 00:57:53.708359 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 00:57:53.708370 | orchestrator | Tuesday 02 September 2025 00:56:11 +0000 (0:00:00.275) 0:00:00.275 ***** 2025-09-02 00:57:53.708386 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:57:53.708406 | orchestrator | ok: [testbed-node-1] 2025-09-02 
00:57:53.708424 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:57:53.708443 | orchestrator | 2025-09-02 00:57:53.708460 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 00:57:53.708478 | orchestrator | Tuesday 02 September 2025 00:56:12 +0000 (0:00:00.319) 0:00:00.594 ***** 2025-09-02 00:57:53.708495 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-02 00:57:53.708514 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-02 00:57:53.708530 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-02 00:57:53.708580 | orchestrator | 2025-09-02 00:57:53.708598 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-09-02 00:57:53.708615 | orchestrator | 2025-09-02 00:57:53.708632 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-02 00:57:53.708649 | orchestrator | Tuesday 02 September 2025 00:56:12 +0000 (0:00:00.512) 0:00:01.106 ***** 2025-09-02 00:57:53.708665 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:57:53.708682 | orchestrator | 2025-09-02 00:57:53.708700 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-02 00:57:53.708718 | orchestrator | Tuesday 02 September 2025 00:56:13 +0000 (0:00:00.520) 0:00:01.627 ***** 2025-09-02 00:57:53.708771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-02 00:57:53.708815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-02 00:57:53.708844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-02 00:57:53.708857 | orchestrator | 2025-09-02 00:57:53.708869 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-02 00:57:53.708880 | orchestrator | Tuesday 02 September 2025 00:56:14 +0000 (0:00:01.166) 0:00:02.794 ***** 2025-09-02 00:57:53.708890 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:57:53.708901 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:57:53.708912 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:57:53.708923 | orchestrator | 2025-09-02 00:57:53.708934 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-02 00:57:53.708944 | orchestrator | Tuesday 02 September 2025 00:56:14 +0000 (0:00:00.438) 0:00:03.232 ***** 2025-09-02 00:57:53.708955 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-02 00:57:53.708966 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-02 00:57:53.708983 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-02 00:57:53.708994 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-02 00:57:53.709005 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-02 00:57:53.709016 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-02 00:57:53.709027 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-02 00:57:53.709038 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-02 00:57:53.709049 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-02 00:57:53.709066 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-02 00:57:53.709077 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-02 00:57:53.709088 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-02 00:57:53.709099 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-02 00:57:53.709110 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-02 00:57:53.709121 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-02 
00:57:53.709131 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-02 00:57:53.709142 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-02 00:57:53.709153 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-02 00:57:53.709164 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-02 00:57:53.709175 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-02 00:57:53.709186 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-02 00:57:53.709196 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-02 00:57:53.709207 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-02 00:57:53.709218 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-02 00:57:53.709230 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-02 00:57:53.709243 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-02 00:57:53.709254 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-02 00:57:53.709265 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-02 00:57:53.709281 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-02 00:57:53.709292 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-02 00:57:53.709303 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-02 00:57:53.709314 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-02 00:57:53.709325 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-02 00:57:53.709337 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-02 00:57:53.709347 | orchestrator | 2025-09-02 00:57:53.709359 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-02 00:57:53.709370 | orchestrator | Tuesday 02 September 2025 00:56:15 +0000 (0:00:00.826) 0:00:04.059 ***** 2025-09-02 00:57:53.709381 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:57:53.709391 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:57:53.709409 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:57:53.709420 | 
orchestrator | 2025-09-02 00:57:53.709431 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-02 00:57:53.709441 | orchestrator | Tuesday 02 September 2025 00:56:15 +0000 (0:00:00.338) 0:00:04.397 ***** 2025-09-02 00:57:53.709453 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.709464 | orchestrator | 2025-09-02 00:57:53.709475 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-02 00:57:53.709491 | orchestrator | Tuesday 02 September 2025 00:56:15 +0000 (0:00:00.141) 0:00:04.539 ***** 2025-09-02 00:57:53.709502 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.709513 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:57:53.709524 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:57:53.709535 | orchestrator | 2025-09-02 00:57:53.709567 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-02 00:57:53.709579 | orchestrator | Tuesday 02 September 2025 00:56:16 +0000 (0:00:00.453) 0:00:04.992 ***** 2025-09-02 00:57:53.709590 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:57:53.709601 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:57:53.709612 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:57:53.709622 | orchestrator | 2025-09-02 00:57:53.709633 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-02 00:57:53.709644 | orchestrator | Tuesday 02 September 2025 00:56:16 +0000 (0:00:00.323) 0:00:05.316 ***** 2025-09-02 00:57:53.709655 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.709665 | orchestrator | 2025-09-02 00:57:53.709676 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-02 00:57:53.709687 | orchestrator | Tuesday 02 September 2025 00:56:16 +0000 (0:00:00.136) 0:00:05.452 ***** 2025-09-02 00:57:53.709698 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.709708 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:57:53.709719 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:57:53.709730 | orchestrator | 2025-09-02 00:57:53.709740 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-02 00:57:53.709751 | orchestrator | Tuesday 02 September 2025 00:56:17 +0000 (0:00:00.301) 0:00:05.753 ***** 2025-09-02 00:57:53.709762 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:57:53.709773 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:57:53.709783 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:57:53.709794 | orchestrator | 2025-09-02 00:57:53.709805 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-02 00:57:53.709816 | orchestrator | Tuesday 02 September 2025 00:56:17 +0000 (0:00:00.301) 0:00:06.055 ***** 2025-09-02 00:57:53.709827 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.709838 | orchestrator | 2025-09-02 00:57:53.709848 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-02 00:57:53.709859 | orchestrator | Tuesday 02 September 2025 00:56:17 +0000 (0:00:00.132) 0:00:06.187 ***** 2025-09-02 00:57:53.709870 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.709881 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:57:53.709891 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:57:53.709902 | orchestrator | 
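The per-service policy passes above (ceilometer, cinder, designate, glance, keystone, magnum, manila, neutron, nova, octavia) mirror the ENABLE_* environment in the horizon container definition dumped earlier in this play: the designate, magnum, manila and octavia dashboards are switched on, the rest stay off. In kolla-ansible those flags normally follow the service toggles in globals.yml; a hedged sketch of the corresponding settings is given below, with variable names taken from kolla-ansible conventions rather than from this testbed's actual configuration.

```yaml
# Assumed kolla-ansible globals.yml toggles behind the ENABLE_* environment
# seen in the horizon container definition above (illustrative sketch only,
# not verified against the testbed's configuration repository).
enable_horizon: "yes"
enable_designate: "yes"
enable_magnum: "yes"
enable_manila: "yes"
enable_octavia: "yes"
enable_heat: "no"
enable_ironic: "no"
enable_masakari: "no"
```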
2025-09-02 00:57:53.709913 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-02 00:57:53.709923 | orchestrator | Tuesday 02 September 2025 00:56:18 +0000 (0:00:00.553) 0:00:06.741 ***** 2025-09-02 00:57:53.709934 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:57:53.709945 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:57:53.709956 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:57:53.709966 | orchestrator | 2025-09-02 00:57:53.709977 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-02 00:57:53.709988 | orchestrator | Tuesday 02 September 2025 00:56:18 +0000 (0:00:00.293) 0:00:07.034 ***** 2025-09-02 00:57:53.709998 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.710009 | orchestrator | 2025-09-02 00:57:53.710068 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-02 00:57:53.710086 | orchestrator | Tuesday 02 September 2025 00:56:18 +0000 (0:00:00.149) 0:00:07.184 ***** 2025-09-02 00:57:53.710098 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.710109 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:57:53.710119 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:57:53.710130 | orchestrator | 2025-09-02 00:57:53.710141 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-02 00:57:53.710151 | orchestrator | Tuesday 02 September 2025 00:56:18 +0000 (0:00:00.303) 0:00:07.488 ***** 2025-09-02 00:57:53.710162 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:57:53.710178 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:57:53.710189 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:57:53.710200 | orchestrator | 2025-09-02 00:57:53.710211 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-02 00:57:53.710221 | orchestrator | Tuesday 02 September 2025 00:56:19 +0000 (0:00:00.300) 0:00:07.788 ***** 2025-09-02 00:57:53.710232 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.710243 | orchestrator | 2025-09-02 00:57:53.710254 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-02 00:57:53.710264 | orchestrator | Tuesday 02 September 2025 00:56:19 +0000 (0:00:00.309) 0:00:08.098 ***** 2025-09-02 00:57:53.710275 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.710286 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:57:53.710296 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:57:53.710307 | orchestrator | 2025-09-02 00:57:53.710318 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-02 00:57:53.710328 | orchestrator | Tuesday 02 September 2025 00:56:19 +0000 (0:00:00.298) 0:00:08.397 ***** 2025-09-02 00:57:53.710339 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:57:53.710350 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:57:53.710361 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:57:53.710371 | orchestrator | 2025-09-02 00:57:53.710382 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-02 00:57:53.710393 | orchestrator | Tuesday 02 September 2025 00:56:20 +0000 (0:00:00.333) 0:00:08.730 ***** 2025-09-02 00:57:53.710404 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.710414 | orchestrator | 2025-09-02 00:57:53.710425 | orchestrator 
| TASK [horizon : Update custom policy file name] ******************************** 2025-09-02 00:57:53.710436 | orchestrator | Tuesday 02 September 2025 00:56:20 +0000 (0:00:00.124) 0:00:08.855 ***** 2025-09-02 00:57:53.710447 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.710457 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:57:53.710468 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:57:53.710478 | orchestrator | 2025-09-02 00:57:53.710489 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-02 00:57:53.710500 | orchestrator | Tuesday 02 September 2025 00:56:20 +0000 (0:00:00.297) 0:00:09.153 ***** 2025-09-02 00:57:53.710511 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:57:53.710522 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:57:53.710533 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:57:53.710543 | orchestrator | 2025-09-02 00:57:53.710587 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-02 00:57:53.710598 | orchestrator | Tuesday 02 September 2025 00:56:21 +0000 (0:00:00.502) 0:00:09.655 ***** 2025-09-02 00:57:53.710609 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.710620 | orchestrator | 2025-09-02 00:57:53.710631 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-02 00:57:53.710641 | orchestrator | Tuesday 02 September 2025 00:56:21 +0000 (0:00:00.144) 0:00:09.799 ***** 2025-09-02 00:57:53.710652 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.710663 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:57:53.710674 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:57:53.710685 | orchestrator | 2025-09-02 00:57:53.710695 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-02 00:57:53.710721 | orchestrator | Tuesday 02 September 2025 00:56:21 +0000 (0:00:00.294) 0:00:10.094 ***** 2025-09-02 00:57:53.710732 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:57:53.710743 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:57:53.710754 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:57:53.710764 | orchestrator | 2025-09-02 00:57:53.710775 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-02 00:57:53.710786 | orchestrator | Tuesday 02 September 2025 00:56:21 +0000 (0:00:00.389) 0:00:10.484 ***** 2025-09-02 00:57:53.710796 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.710807 | orchestrator | 2025-09-02 00:57:53.710818 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-02 00:57:53.710829 | orchestrator | Tuesday 02 September 2025 00:56:22 +0000 (0:00:00.116) 0:00:10.600 ***** 2025-09-02 00:57:53.710840 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.710850 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:57:53.710861 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:57:53.710872 | orchestrator | 2025-09-02 00:57:53.710883 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-02 00:57:53.710894 | orchestrator | Tuesday 02 September 2025 00:56:22 +0000 (0:00:00.274) 0:00:10.874 ***** 2025-09-02 00:57:53.710905 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:57:53.710916 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:57:53.710926 | orchestrator | ok: 
[testbed-node-2] 2025-09-02 00:57:53.710937 | orchestrator | 2025-09-02 00:57:53.710948 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-02 00:57:53.710959 | orchestrator | Tuesday 02 September 2025 00:56:22 +0000 (0:00:00.542) 0:00:11.417 ***** 2025-09-02 00:57:53.710970 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.710981 | orchestrator | 2025-09-02 00:57:53.710992 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-02 00:57:53.711003 | orchestrator | Tuesday 02 September 2025 00:56:23 +0000 (0:00:00.132) 0:00:11.549 ***** 2025-09-02 00:57:53.711014 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.711024 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:57:53.711035 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:57:53.711046 | orchestrator | 2025-09-02 00:57:53.711057 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-02 00:57:53.711068 | orchestrator | Tuesday 02 September 2025 00:56:23 +0000 (0:00:00.284) 0:00:11.833 ***** 2025-09-02 00:57:53.711079 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:57:53.711090 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:57:53.711101 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:57:53.711111 | orchestrator | 2025-09-02 00:57:53.711122 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-02 00:57:53.711133 | orchestrator | Tuesday 02 September 2025 00:56:23 +0000 (0:00:00.344) 0:00:12.178 ***** 2025-09-02 00:57:53.711144 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.711155 | orchestrator | 2025-09-02 00:57:53.711166 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-02 00:57:53.711182 | orchestrator | Tuesday 02 September 2025 00:56:23 +0000 (0:00:00.139) 0:00:12.317 ***** 2025-09-02 00:57:53.711193 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.711204 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:57:53.711215 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:57:53.711226 | orchestrator | 2025-09-02 00:57:53.711237 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-02 00:57:53.711248 | orchestrator | Tuesday 02 September 2025 00:56:24 +0000 (0:00:00.513) 0:00:12.831 ***** 2025-09-02 00:57:53.711259 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:57:53.711270 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:57:53.711280 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:57:53.711291 | orchestrator | 2025-09-02 00:57:53.711302 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-09-02 00:57:53.711313 | orchestrator | Tuesday 02 September 2025 00:56:25 +0000 (0:00:01.669) 0:00:14.500 ***** 2025-09-02 00:57:53.711330 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-02 00:57:53.711341 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-02 00:57:53.711352 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-02 00:57:53.711363 | orchestrator | 2025-09-02 00:57:53.711374 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 
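For readability, the health check and the external endpoint pieces of the horizon container definition dumped at the start of this play are re-rendered below as YAML. Every value is copied verbatim from the log (the testbed-node-0 variant), so this is a restatement of what the role is deploying, not additional configuration.

```yaml
# Values copied from the horizon container definition logged above (node-0).
healthcheck:
  interval: "30"
  retries: "3"
  start_period: "5"
  test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:80"]
  timeout: "30"
haproxy:
  horizon_external:
    enabled: true
    mode: http
    external: true
    external_fqdn: api.testbed.osism.xyz
    port: "443"
    listen_port: "80"
    frontend_http_extra:
      - "use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }"
    backend_http_extra:
      - "balance roundrobin"
    tls_backend: "no"
```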
2025-09-02 00:57:53.711385 | orchestrator | Tuesday 02 September 2025 00:56:27 +0000 (0:00:01.934) 0:00:16.434 ***** 2025-09-02 00:57:53.711396 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-02 00:57:53.711407 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-02 00:57:53.711418 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-02 00:57:53.711429 | orchestrator | 2025-09-02 00:57:53.711439 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-02 00:57:53.711450 | orchestrator | Tuesday 02 September 2025 00:56:30 +0000 (0:00:02.136) 0:00:18.570 ***** 2025-09-02 00:57:53.711467 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-02 00:57:53.711478 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-02 00:57:53.711489 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-02 00:57:53.711500 | orchestrator | 2025-09-02 00:57:53.711511 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-02 00:57:53.711522 | orchestrator | Tuesday 02 September 2025 00:56:31 +0000 (0:00:01.879) 0:00:20.450 ***** 2025-09-02 00:57:53.711533 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.711544 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:57:53.711570 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:57:53.711581 | orchestrator | 2025-09-02 00:57:53.711592 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-02 00:57:53.711603 | orchestrator | Tuesday 02 September 2025 00:56:32 +0000 (0:00:00.324) 0:00:20.774 ***** 2025-09-02 00:57:53.711614 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.711625 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:57:53.711635 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:57:53.711646 | orchestrator | 2025-09-02 00:57:53.711657 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-02 00:57:53.711668 | orchestrator | Tuesday 02 September 2025 00:56:32 +0000 (0:00:00.308) 0:00:21.083 ***** 2025-09-02 00:57:53.711678 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:57:53.711689 | orchestrator | 2025-09-02 00:57:53.711700 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-02 00:57:53.711711 | orchestrator | Tuesday 02 September 2025 00:56:33 +0000 (0:00:00.569) 0:00:21.653 ***** 2025-09-02 00:57:53.711729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-02 00:57:53.711761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-02 00:57:53.711781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-02 00:57:53.711800 | orchestrator | 2025-09-02 00:57:53.711811 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-02 00:57:53.711822 | orchestrator | Tuesday 02 September 2025 00:56:34 +0000 (0:00:01.831) 0:00:23.484 ***** 2025-09-02 00:57:53.712134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-02 00:57:53.712164 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.712184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-02 00:57:53.712204 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:57:53.712217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-02 00:57:53.712235 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:57:53.712246 | orchestrator | 2025-09-02 00:57:53.712257 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-02 00:57:53.712268 | orchestrator | Tuesday 02 September 2025 00:56:35 +0000 (0:00:00.705) 0:00:24.190 ***** 2025-09-02 00:57:53.712292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-02 00:57:53.712305 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.712323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-02 00:57:53.712341 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:57:53.712361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-02 00:57:53.712373 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:57:53.712384 | orchestrator | 2025-09-02 00:57:53.712395 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-02 00:57:53.712406 | orchestrator | Tuesday 02 September 2025 00:56:36 +0000 (0:00:00.905) 0:00:25.095 ***** 2025-09-02 00:57:53.712423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-02 00:57:53.712450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2025-09-02 00:57:53.712473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-02 00:57:53.712493 | orchestrator | 2025-09-02 00:57:53.712504 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-02 00:57:53.712515 | orchestrator | Tuesday 02 September 2025 00:56:38 +0000 (0:00:01.576) 0:00:26.672 ***** 2025-09-02 00:57:53.712526 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:57:53.712537 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:57:53.712572 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:57:53.712584 | orchestrator | 2025-09-02 00:57:53.712594 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-02 00:57:53.712605 | orchestrator | Tuesday 02 September 2025 00:56:38 +0000 (0:00:00.293) 0:00:26.965 ***** 2025-09-02 00:57:53.712616 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:57:53.712627 | orchestrator | 2025-09-02 00:57:53.712638 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-09-02 00:57:53.712648 | orchestrator | Tuesday 02 September 2025 00:56:38 +0000 (0:00:00.513) 0:00:27.479 ***** 2025-09-02 00:57:53.712659 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:57:53.712670 | orchestrator | 
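For orientation: the "Creating Horizon database" step above is a kolla-ansible-style bootstrap task that creates the service schema before the web containers come up. A minimal sketch of what such a task commonly looks like, assuming the kolla_toolbox wrapper around mysql_db and illustrative variable names (none of these values are taken from this log):

    # Sketch only: creates the horizon schema on the database cluster if it is missing.
    # database_address, database_user and database_password are assumed variables.
    - name: Creating Horizon database
      kolla_toolbox:
        module_name: mysql_db
        module_args:
          login_host: "{{ database_address }}"
          login_user: "{{ database_user }}"
          login_password: "{{ database_password }}"
          name: horizon
      run_once: true

The subsequent "Creating Horizon database user and setting permissions" task typically follows the same pattern with mysql_user, granting the horizon user access to that schema.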
2025-09-02 00:57:53.712686 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-09-02 00:57:53.712697 | orchestrator | Tuesday 02 September 2025 00:56:41 +0000 (0:00:02.090) 0:00:29.569 ***** 2025-09-02 00:57:53.712708 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:57:53.712719 | orchestrator | 2025-09-02 00:57:53.712730 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-09-02 00:57:53.712742 | orchestrator | Tuesday 02 September 2025 00:56:43 +0000 (0:00:02.687) 0:00:32.257 ***** 2025-09-02 00:57:53.712754 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:57:53.712767 | orchestrator | 2025-09-02 00:57:53.712779 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-02 00:57:53.712792 | orchestrator | Tuesday 02 September 2025 00:56:59 +0000 (0:00:15.572) 0:00:47.830 ***** 2025-09-02 00:57:53.712805 | orchestrator | 2025-09-02 00:57:53.712824 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-02 00:57:53.712837 | orchestrator | Tuesday 02 September 2025 00:56:59 +0000 (0:00:00.068) 0:00:47.898 ***** 2025-09-02 00:57:53.712849 | orchestrator | 2025-09-02 00:57:53.712862 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-02 00:57:53.712874 | orchestrator | Tuesday 02 September 2025 00:56:59 +0000 (0:00:00.061) 0:00:47.960 ***** 2025-09-02 00:57:53.712887 | orchestrator | 2025-09-02 00:57:53.712899 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-09-02 00:57:53.712911 | orchestrator | Tuesday 02 September 2025 00:56:59 +0000 (0:00:00.076) 0:00:48.036 ***** 2025-09-02 00:57:53.712923 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:57:53.712936 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:57:53.712948 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:57:53.712960 | orchestrator | 2025-09-02 00:57:53.712972 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:57:53.712985 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-09-02 00:57:53.712998 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-02 00:57:53.713011 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-02 00:57:53.713024 | orchestrator | 2025-09-02 00:57:53.713036 | orchestrator | 2025-09-02 00:57:53.713049 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:57:53.713061 | orchestrator | Tuesday 02 September 2025 00:57:52 +0000 (0:00:52.979) 0:01:41.016 ***** 2025-09-02 00:57:53.713073 | orchestrator | =============================================================================== 2025-09-02 00:57:53.713085 | orchestrator | horizon : Restart horizon container ------------------------------------ 52.98s 2025-09-02 00:57:53.713098 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.57s 2025-09-02 00:57:53.713108 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.69s 2025-09-02 00:57:53.713119 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.14s 2025-09-02 
00:57:53.713130 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.09s 2025-09-02 00:57:53.713140 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.93s 2025-09-02 00:57:53.713151 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.88s 2025-09-02 00:57:53.713167 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.83s 2025-09-02 00:57:53.713177 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.67s 2025-09-02 00:57:53.713188 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.58s 2025-09-02 00:57:53.713199 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.17s 2025-09-02 00:57:53.713210 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.91s 2025-09-02 00:57:53.713220 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.83s 2025-09-02 00:57:53.713231 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.71s 2025-09-02 00:57:53.713242 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s 2025-09-02 00:57:53.713253 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s 2025-09-02 00:57:53.713264 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s 2025-09-02 00:57:53.713274 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2025-09-02 00:57:53.713285 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.51s 2025-09-02 00:57:53.713307 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s 2025-09-02 00:57:53.713318 | orchestrator | 2025-09-02 00:57:53 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state STARTED 2025-09-02 00:57:53.713329 | orchestrator | 2025-09-02 00:57:53 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:56.763806 | orchestrator | 2025-09-02 00:57:56 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:56.768868 | orchestrator | 2025-09-02 00:57:56 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state STARTED 2025-09-02 00:57:56.769182 | orchestrator | 2025-09-02 00:57:56 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:57:59.814372 | orchestrator | 2025-09-02 00:57:59 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:57:59.818836 | orchestrator | 2025-09-02 00:57:59 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state STARTED 2025-09-02 00:57:59.818876 | orchestrator | 2025-09-02 00:57:59 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:02.860720 | orchestrator | 2025-09-02 00:58:02 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:58:02.862345 | orchestrator | 2025-09-02 00:58:02 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state STARTED 2025-09-02 00:58:02.862380 | orchestrator | 2025-09-02 00:58:02 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:05.900390 | orchestrator | 2025-09-02 00:58:05 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:58:05.901793 | 
orchestrator | 2025-09-02 00:58:05 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state STARTED 2025-09-02 00:58:05.901833 | orchestrator | 2025-09-02 00:58:05 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:08.945114 | orchestrator | 2025-09-02 00:58:08 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:58:08.945926 | orchestrator | 2025-09-02 00:58:08 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state STARTED 2025-09-02 00:58:08.946279 | orchestrator | 2025-09-02 00:58:08 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:11.994200 | orchestrator | 2025-09-02 00:58:11 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:58:11.998823 | orchestrator | 2025-09-02 00:58:11 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state STARTED 2025-09-02 00:58:11.998856 | orchestrator | 2025-09-02 00:58:11 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:15.047029 | orchestrator | 2025-09-02 00:58:15 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:58:15.048803 | orchestrator | 2025-09-02 00:58:15 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state STARTED 2025-09-02 00:58:15.048832 | orchestrator | 2025-09-02 00:58:15 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:18.089510 | orchestrator | 2025-09-02 00:58:18 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:58:18.091250 | orchestrator | 2025-09-02 00:58:18 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state STARTED 2025-09-02 00:58:18.091288 | orchestrator | 2025-09-02 00:58:18 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:21.135964 | orchestrator | 2025-09-02 00:58:21 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:58:21.138296 | orchestrator | 2025-09-02 00:58:21 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state STARTED 2025-09-02 00:58:21.138350 | orchestrator | 2025-09-02 00:58:21 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:24.185901 | orchestrator | 2025-09-02 00:58:24 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:58:24.187529 | orchestrator | 2025-09-02 00:58:24 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state STARTED 2025-09-02 00:58:24.187772 | orchestrator | 2025-09-02 00:58:24 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:27.229453 | orchestrator | 2025-09-02 00:58:27 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:58:27.231864 | orchestrator | 2025-09-02 00:58:27 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state STARTED 2025-09-02 00:58:27.231899 | orchestrator | 2025-09-02 00:58:27 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:30.279432 | orchestrator | 2025-09-02 00:58:30 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:58:30.281385 | orchestrator | 2025-09-02 00:58:30 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state STARTED 2025-09-02 00:58:30.281415 | orchestrator | 2025-09-02 00:58:30 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:33.324490 | orchestrator | 2025-09-02 00:58:33 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:58:33.326301 | orchestrator | 2025-09-02 00:58:33 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state 
STARTED 2025-09-02 00:58:33.326334 | orchestrator | 2025-09-02 00:58:33 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:36.364679 | orchestrator | 2025-09-02 00:58:36 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:58:36.365819 | orchestrator | 2025-09-02 00:58:36 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state STARTED 2025-09-02 00:58:36.365849 | orchestrator | 2025-09-02 00:58:36 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:39.419471 | orchestrator | 2025-09-02 00:58:39 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:58:39.420214 | orchestrator | 2025-09-02 00:58:39 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state STARTED 2025-09-02 00:58:39.420242 | orchestrator | 2025-09-02 00:58:39 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:42.460914 | orchestrator | 2025-09-02 00:58:42 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:58:42.463730 | orchestrator | 2025-09-02 00:58:42 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state STARTED 2025-09-02 00:58:42.463764 | orchestrator | 2025-09-02 00:58:42 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:45.507990 | orchestrator | 2025-09-02 00:58:45 | INFO  | Task b7bc5a84-fd43-478e-94b1-6f3589d60154 is in state STARTED 2025-09-02 00:58:45.510509 | orchestrator | 2025-09-02 00:58:45 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state STARTED 2025-09-02 00:58:45.513219 | orchestrator | 2025-09-02 00:58:45 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:58:45.515154 | orchestrator | 2025-09-02 00:58:45 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:58:45.517555 | orchestrator | 2025-09-02 00:58:45 | INFO  | Task 16b3994a-fb32-404d-a19b-faa7e8a94840 is in state SUCCESS 2025-09-02 00:58:45.517983 | orchestrator | 2025-09-02 00:58:45 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:48.552219 | orchestrator | 2025-09-02 00:58:48 | INFO  | Task b7bc5a84-fd43-478e-94b1-6f3589d60154 is in state STARTED 2025-09-02 00:58:48.553835 | orchestrator | 2025-09-02 00:58:48 | INFO  | Task 8cd8b941-cb12-42f4-b464-084c6e1728ee is in state SUCCESS 2025-09-02 00:58:48.555511 | orchestrator | 2025-09-02 00:58:48.555558 | orchestrator | 2025-09-02 00:58:48.555999 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-09-02 00:58:48.556018 | orchestrator | 2025-09-02 00:58:48.556030 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-09-02 00:58:48.556041 | orchestrator | Tuesday 02 September 2025 00:57:50 +0000 (0:00:00.245) 0:00:00.245 ***** 2025-09-02 00:58:48.556052 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-09-02 00:58:48.556064 | orchestrator | 2025-09-02 00:58:48.556075 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-09-02 00:58:48.556086 | orchestrator | Tuesday 02 September 2025 00:57:51 +0000 (0:00:00.253) 0:00:00.499 ***** 2025-09-02 00:58:48.556097 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-09-02 00:58:48.556122 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-09-02 00:58:48.556134 | orchestrator | 
ok: [testbed-manager] => (item=/opt/cephclient) 2025-09-02 00:58:48.556145 | orchestrator | 2025-09-02 00:58:48.556156 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-09-02 00:58:48.556166 | orchestrator | Tuesday 02 September 2025 00:57:52 +0000 (0:00:01.215) 0:00:01.715 ***** 2025-09-02 00:58:48.556177 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-09-02 00:58:48.556189 | orchestrator | 2025-09-02 00:58:48.556199 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-09-02 00:58:48.556210 | orchestrator | Tuesday 02 September 2025 00:57:53 +0000 (0:00:01.164) 0:00:02.879 ***** 2025-09-02 00:58:48.556221 | orchestrator | changed: [testbed-manager] 2025-09-02 00:58:48.556231 | orchestrator | 2025-09-02 00:58:48.556243 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-09-02 00:58:48.556254 | orchestrator | Tuesday 02 September 2025 00:57:54 +0000 (0:00:01.043) 0:00:03.922 ***** 2025-09-02 00:58:48.556265 | orchestrator | changed: [testbed-manager] 2025-09-02 00:58:48.556276 | orchestrator | 2025-09-02 00:58:48.556287 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-09-02 00:58:48.556298 | orchestrator | Tuesday 02 September 2025 00:57:55 +0000 (0:00:00.897) 0:00:04.820 ***** 2025-09-02 00:58:48.556309 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-09-02 00:58:48.556320 | orchestrator | ok: [testbed-manager] 2025-09-02 00:58:48.556331 | orchestrator | 2025-09-02 00:58:48.556342 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-09-02 00:58:48.556353 | orchestrator | Tuesday 02 September 2025 00:58:32 +0000 (0:00:36.774) 0:00:41.594 ***** 2025-09-02 00:58:48.556364 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-09-02 00:58:48.556375 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-09-02 00:58:48.556386 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-09-02 00:58:48.556397 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-09-02 00:58:48.556408 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-09-02 00:58:48.556479 | orchestrator | 2025-09-02 00:58:48.556494 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-09-02 00:58:48.556505 | orchestrator | Tuesday 02 September 2025 00:58:36 +0000 (0:00:04.124) 0:00:45.719 ***** 2025-09-02 00:58:48.556516 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-09-02 00:58:48.556526 | orchestrator | 2025-09-02 00:58:48.556537 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-09-02 00:58:48.556548 | orchestrator | Tuesday 02 September 2025 00:58:36 +0000 (0:00:00.466) 0:00:46.186 ***** 2025-09-02 00:58:48.556558 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:58:48.556569 | orchestrator | 2025-09-02 00:58:48.556630 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-09-02 00:58:48.556643 | orchestrator | Tuesday 02 September 2025 00:58:36 +0000 (0:00:00.141) 0:00:46.327 ***** 2025-09-02 00:58:48.556654 | orchestrator | skipping: [testbed-manager] 2025-09-02 00:58:48.556665 | orchestrator | 
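For orientation: the cephclient play above creates /opt/cephclient/configuration and /opt/cephclient/data, copies a docker-compose.yml, and starts the service, which the handlers below then restart. A minimal sketch of what such a compose file could contain; the image reference and mount targets are placeholders, since the actual template is not part of this log:

    # Sketch only: service name, image and mount targets are illustrative assumptions.
    services:
      cephclient:
        container_name: cephclient
        image: cephclient:latest    # placeholder; the real image reference is not shown here
        restart: unless-stopped
        volumes:
          - /opt/cephclient/configuration:/etc/ceph:ro
          - /opt/cephclient/data:/data

The wrapper scripts copied afterwards (ceph, ceph-authtool, rados, radosgw-admin, rbd) then typically just exec the matching binary inside this container; a separate task above removes wrappers that are no longer shipped, such as crushtool.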
2025-09-02 00:58:48.556676 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-09-02 00:58:48.556687 | orchestrator | Tuesday 02 September 2025 00:58:37 +0000 (0:00:00.300) 0:00:46.628 ***** 2025-09-02 00:58:48.556697 | orchestrator | changed: [testbed-manager] 2025-09-02 00:58:48.556708 | orchestrator | 2025-09-02 00:58:48.556719 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-09-02 00:58:48.556730 | orchestrator | Tuesday 02 September 2025 00:58:39 +0000 (0:00:02.088) 0:00:48.716 ***** 2025-09-02 00:58:48.556740 | orchestrator | changed: [testbed-manager] 2025-09-02 00:58:48.556751 | orchestrator | 2025-09-02 00:58:48.556762 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-09-02 00:58:48.556773 | orchestrator | Tuesday 02 September 2025 00:58:40 +0000 (0:00:00.784) 0:00:49.501 ***** 2025-09-02 00:58:48.556784 | orchestrator | changed: [testbed-manager] 2025-09-02 00:58:48.556794 | orchestrator | 2025-09-02 00:58:48.556805 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-09-02 00:58:48.556816 | orchestrator | Tuesday 02 September 2025 00:58:40 +0000 (0:00:00.652) 0:00:50.154 ***** 2025-09-02 00:58:48.556827 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-09-02 00:58:48.556838 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-09-02 00:58:48.556849 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-09-02 00:58:48.556859 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-09-02 00:58:48.556870 | orchestrator | 2025-09-02 00:58:48.556880 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:58:48.556892 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-02 00:58:48.556903 | orchestrator | 2025-09-02 00:58:48.556914 | orchestrator | 2025-09-02 00:58:48.556968 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:58:48.556981 | orchestrator | Tuesday 02 September 2025 00:58:42 +0000 (0:00:01.430) 0:00:51.584 ***** 2025-09-02 00:58:48.556993 | orchestrator | =============================================================================== 2025-09-02 00:58:48.557003 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 36.77s 2025-09-02 00:58:48.557014 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.12s 2025-09-02 00:58:48.557025 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.09s 2025-09-02 00:58:48.557036 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.43s 2025-09-02 00:58:48.557046 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.22s 2025-09-02 00:58:48.557057 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.16s 2025-09-02 00:58:48.557175 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.04s 2025-09-02 00:58:48.557198 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.90s 2025-09-02 00:58:48.557211 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.78s 2025-09-02 00:58:48.557223 | 
orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.65s 2025-09-02 00:58:48.557236 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s 2025-09-02 00:58:48.557248 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.30s 2025-09-02 00:58:48.557262 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s 2025-09-02 00:58:48.557274 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2025-09-02 00:58:48.557286 | orchestrator | 2025-09-02 00:58:48.557299 | orchestrator | 2025-09-02 00:58:48.557318 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 00:58:48.557331 | orchestrator | 2025-09-02 00:58:48.557343 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 00:58:48.557356 | orchestrator | Tuesday 02 September 2025 00:56:11 +0000 (0:00:00.281) 0:00:00.281 ***** 2025-09-02 00:58:48.557368 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:58:48.557381 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:58:48.557393 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:58:48.557406 | orchestrator | 2025-09-02 00:58:48.557418 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 00:58:48.557430 | orchestrator | Tuesday 02 September 2025 00:56:12 +0000 (0:00:00.313) 0:00:00.594 ***** 2025-09-02 00:58:48.557442 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-02 00:58:48.557455 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-02 00:58:48.557468 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-02 00:58:48.557482 | orchestrator | 2025-09-02 00:58:48.557494 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-02 00:58:48.557504 | orchestrator | 2025-09-02 00:58:48.557515 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-02 00:58:48.557526 | orchestrator | Tuesday 02 September 2025 00:56:12 +0000 (0:00:00.447) 0:00:01.042 ***** 2025-09-02 00:58:48.557537 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:58:48.557548 | orchestrator | 2025-09-02 00:58:48.557675 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-02 00:58:48.557687 | orchestrator | Tuesday 02 September 2025 00:56:13 +0000 (0:00:00.555) 0:00:01.598 ***** 2025-09-02 00:58:48.557703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:58:48.557756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:58:48.557779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:58:48.557801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-02 00:58:48.557813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-02 00:58:48.557824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-02 00:58:48.557836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-02 00:58:48.557854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-02 00:58:48.557876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-02 00:58:48.557888 | orchestrator | 2025-09-02 00:58:48.557899 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-02 00:58:48.557910 | orchestrator | Tuesday 02 September 2025 00:56:14 +0000 (0:00:01.871) 0:00:03.469 ***** 2025-09-02 00:58:48.557922 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-02 00:58:48.557933 | orchestrator | 2025-09-02 00:58:48.557943 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-02 00:58:48.557954 | orchestrator | Tuesday 02 September 2025 00:56:15 +0000 (0:00:00.942) 0:00:04.412 ***** 2025-09-02 00:58:48.557965 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:58:48.557976 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:58:48.557987 | orchestrator | ok: 
[testbed-node-2] 2025-09-02 00:58:48.557998 | orchestrator | 2025-09-02 00:58:48.558008 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-09-02 00:58:48.558078 | orchestrator | Tuesday 02 September 2025 00:56:16 +0000 (0:00:00.528) 0:00:04.940 ***** 2025-09-02 00:58:48.558089 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-02 00:58:48.558100 | orchestrator | 2025-09-02 00:58:48.558111 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-02 00:58:48.558122 | orchestrator | Tuesday 02 September 2025 00:56:17 +0000 (0:00:00.708) 0:00:05.649 ***** 2025-09-02 00:58:48.558133 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:58:48.558144 | orchestrator | 2025-09-02 00:58:48.558154 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-02 00:58:48.558165 | orchestrator | Tuesday 02 September 2025 00:56:17 +0000 (0:00:00.530) 0:00:06.179 ***** 2025-09-02 00:58:48.558177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:58:48.558198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:58:48.558223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:58:48.558236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-02 00:58:48.558248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-02 00:58:48.558260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-02 00:58:48.558271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-02 00:58:48.558288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-02 00:58:48.558311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-02 00:58:48.558324 | orchestrator | 2025-09-02 00:58:48.558336 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-02 00:58:48.558349 | orchestrator | Tuesday 02 September 2025 00:56:20 +0000 (0:00:02.879) 0:00:09.059 ***** 2025-09-02 00:58:48.558362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-02 00:58:48.558376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-02 00:58:48.558389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-02 00:58:48.558401 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:58:48.558421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-02 00:58:48.558447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-02 00:58:48.558461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-02 00:58:48.558473 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:58:48.558487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-02 00:58:48.558501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-02 00:58:48.558514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-02 00:58:48.558533 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:58:48.558546 | orchestrator | 2025-09-02 00:58:48.558558 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-02 00:58:48.558571 | orchestrator | Tuesday 02 September 2025 00:56:21 +0000 (0:00:00.752) 0:00:09.811 ***** 2025-09-02 00:58:48.558648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-02 00:58:48.558666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-02 00:58:48.558679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-02 00:58:48.558690 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:58:48.558702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-02 00:58:48.558724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-02 00:58:48.558742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-02 00:58:48.558753 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:58:48.558769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-02 00:58:48.558781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-02 00:58:48.558793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-02 00:58:48.558804 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:58:48.558815 | orchestrator | 2025-09-02 00:58:48.558826 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-09-02 00:58:48.558837 | orchestrator | Tuesday 02 September 2025 00:56:22 +0000 (0:00:00.834) 0:00:10.646 ***** 2025-09-02 00:58:48.558848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:58:48.558873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:58:48.558890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:58:48.558902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-02 00:58:48.558914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-02 00:58:48.558932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-02 00:58:48.558943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-02 00:58:48.558961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-02 00:58:48.558977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-02 00:58:48.558989 | orchestrator | 2025-09-02 00:58:48.559000 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-09-02 00:58:48.559011 | orchestrator | Tuesday 02 September 2025 00:56:25 +0000 (0:00:03.155) 0:00:13.802 ***** 2025-09-02 00:58:48.559022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:58:48.559041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-02 00:58:48.559053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:58:48.559071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-02 00:58:48.559088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:58:48.559100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-02 00:58:48.559111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-02 00:58:48.559128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-02 00:58:48.559138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-02 00:58:48.559148 | orchestrator | 2025-09-02 00:58:48.559157 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-09-02 00:58:48.559167 | orchestrator | Tuesday 02 September 2025 00:56:30 +0000 (0:00:05.555) 0:00:19.357 ***** 2025-09-02 00:58:48.559177 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:58:48.559192 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:58:48.559202 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:58:48.559212 | orchestrator | 2025-09-02 00:58:48.559221 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-09-02 00:58:48.559231 | orchestrator | Tuesday 02 September 2025 00:56:32 +0000 (0:00:01.384) 0:00:20.741 ***** 2025-09-02 00:58:48.559241 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:58:48.559250 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:58:48.559260 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:58:48.559269 | orchestrator | 2025-09-02 00:58:48.559279 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-09-02 00:58:48.559288 | orchestrator | Tuesday 02 September 2025 00:56:32 +0000 (0:00:00.553) 0:00:21.295 ***** 2025-09-02 
00:58:48.559298 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:58:48.559308 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:58:48.559317 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:58:48.559326 | orchestrator | 2025-09-02 00:58:48.559336 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-09-02 00:58:48.559349 | orchestrator | Tuesday 02 September 2025 00:56:33 +0000 (0:00:00.303) 0:00:21.598 ***** 2025-09-02 00:58:48.559359 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:58:48.559369 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:58:48.559379 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:58:48.559388 | orchestrator | 2025-09-02 00:58:48.559397 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-09-02 00:58:48.559407 | orchestrator | Tuesday 02 September 2025 00:56:33 +0000 (0:00:00.496) 0:00:22.095 ***** 2025-09-02 00:58:48.559418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:58:48.559436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-02 00:58:48.559447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:58:48.559462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-02 00:58:48.559477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:58:48.559493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-02 00:58:48.559503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-02 00:58:48.559513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-02 00:58:48.559524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-02 00:58:48.559533 | orchestrator | 2025-09-02 00:58:48.559543 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-02 00:58:48.559553 | orchestrator | Tuesday 02 September 2025 00:56:36 +0000 (0:00:02.464) 0:00:24.560 ***** 2025-09-02 00:58:48.559563 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:58:48.559572 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:58:48.559582 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:58:48.559592 | orchestrator | 2025-09-02 00:58:48.559616 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-02 00:58:48.559626 | orchestrator | Tuesday 02 September 2025 00:56:36 +0000 (0:00:00.334) 0:00:24.894 ***** 2025-09-02 00:58:48.559641 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-02 00:58:48.559652 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-02 00:58:48.559662 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-02 00:58:48.559671 | orchestrator | 2025-09-02 00:58:48.559681 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-02 00:58:48.559691 | orchestrator | Tuesday 02 September 2025 00:56:37 +0000 (0:00:01.541) 0:00:26.435 ***** 2025-09-02 00:58:48.559700 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-02 00:58:48.559710 | orchestrator | 2025-09-02 00:58:48.559726 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-02 00:58:48.559736 | orchestrator | Tuesday 02 September 2025 00:56:38 +0000 (0:00:00.892) 0:00:27.328 ***** 2025-09-02 00:58:48.559749 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:58:48.559759 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:58:48.559769 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:58:48.559778 | orchestrator | 2025-09-02 00:58:48.559788 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-02 00:58:48.559797 | orchestrator | Tuesday 02 September 2025 00:56:39 +0000 (0:00:00.832) 0:00:28.161 ***** 2025-09-02 00:58:48.559807 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-02 00:58:48.559817 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-02 00:58:48.559826 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-02 00:58:48.559835 | orchestrator | 2025-09-02 00:58:48.559845 | 
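Every container definition echoed in the loop output above carries a healthcheck stanza: the keystone API container is probed with healthcheck_curl against the node's own 192.168.16.x address on port 5000, the keystone_ssh container with healthcheck_listen sshd 8023, and keystone_fernet with /usr/bin/fernet-healthcheck.sh, each on a 30 second interval with 3 retries. The following minimal Python sketch shows what the first two probes boil down to (an HTTP GET that must not fail and a TCP connect to the sshd port); it is an illustration only, not the kolla healthcheck_curl/healthcheck_listen helpers, and the endpoints are the testbed-node-0 values from the log.

import socket
import urllib.error
import urllib.request


def http_check(url: str, timeout: float = 30.0) -> bool:
    """Rough equivalent of a healthcheck_curl-style probe: GET the URL and
    treat anything below HTTP 500 as healthy (Keystone answers 300 on "/")."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as exc:
        return exc.code < 500
    except OSError:
        return False


def listen_check(host: str, port: int, timeout: float = 30.0) -> bool:
    """Rough equivalent of a healthcheck_listen-style probe: TCP connect only."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Endpoints taken from the testbed-node-0 container definitions above.
    print("keystone API:", http_check("http://192.168.16.10:5000"))
    print("keystone_ssh:", listen_check("192.168.16.10", 8023))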
orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-02 00:58:48.559855 | orchestrator | Tuesday 02 September 2025 00:56:40 +0000 (0:00:01.036) 0:00:29.198 ***** 2025-09-02 00:58:48.559864 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:58:48.559874 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:58:48.559884 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:58:48.559894 | orchestrator | 2025-09-02 00:58:48.559903 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-02 00:58:48.559913 | orchestrator | Tuesday 02 September 2025 00:56:41 +0000 (0:00:00.316) 0:00:29.514 ***** 2025-09-02 00:58:48.559923 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-02 00:58:48.559932 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-02 00:58:48.559942 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-02 00:58:48.559951 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-02 00:58:48.559961 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-02 00:58:48.559971 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-02 00:58:48.559981 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-02 00:58:48.559991 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-02 00:58:48.560000 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-02 00:58:48.560010 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-02 00:58:48.560020 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-02 00:58:48.560029 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-02 00:58:48.560039 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-02 00:58:48.560049 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-02 00:58:48.560058 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-02 00:58:48.560068 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-02 00:58:48.560078 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-02 00:58:48.560087 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-02 00:58:48.560097 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-02 00:58:48.560107 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-02 00:58:48.560125 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-02 00:58:48.560135 | orchestrator | 2025-09-02 
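The "Copying files for keystone-fernet" task just above distributes a crontab together with fernet-rotate.sh, fernet-node-sync.sh, fernet-push.sh and fernet-healthcheck.sh, plus an SSH key and ssh_config; the cron job rotates Keystone's Fernet token keys, and the push/sync scripts copy them to the other keystone hosts over the keystone_ssh container (its sshd listens on 8023, matching the healthcheck above). The rotation rule itself (key 0 is the staged key, the highest index is the primary signing key, the oldest surplus keys are purged) is sketched below in Python; this is an illustration of the rule, not kolla's fernet-rotate.sh, and the repository path is simply the mount point shown in the container volumes above.

import os

from cryptography.fernet import Fernet  # assumption: the 'cryptography' package is installed


def rotate_fernet_keys(repo: str, max_active_keys: int = 3) -> None:
    """Illustrative rotation step, mirroring 'keystone-manage fernet_rotate':
    promote the staged key (0) to primary, stage a fresh key 0, purge extras."""
    keys = sorted(int(name) for name in os.listdir(repo) if name.isdigit())
    if not keys:
        raise RuntimeError("empty fernet repository, bootstrap it first")
    new_primary = max(keys) + 1
    # The staged key becomes the new primary (highest index wins).
    os.rename(os.path.join(repo, "0"), os.path.join(repo, str(new_primary)))
    # A fresh staged key takes its place at index 0.
    with open(os.path.join(repo, "0"), "wb") as handle:
        handle.write(Fernet.generate_key())
    # Purge the oldest secondary keys once more than max_active_keys remain.
    keys = sorted(int(name) for name in os.listdir(repo) if name.isdigit())
    excess = len(keys) - max_active_keys
    if excess > 0:
        for old in [key for key in keys if key != 0][:excess]:
            os.remove(os.path.join(repo, str(old)))


if __name__ == "__main__":
    # The keystone_fernet_tokens volume is mounted at this path in the containers.
    rotate_fernet_keys("/etc/keystone/fernet-keys")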
00:58:48.560145 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-02 00:58:48.560154 | orchestrator | Tuesday 02 September 2025 00:56:50 +0000 (0:00:09.202) 0:00:38.716 ***** 2025-09-02 00:58:48.560164 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-02 00:58:48.560173 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-02 00:58:48.560183 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-02 00:58:48.560197 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-02 00:58:48.560207 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-02 00:58:48.560217 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-02 00:58:48.560227 | orchestrator | 2025-09-02 00:58:48.560236 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-02 00:58:48.560246 | orchestrator | Tuesday 02 September 2025 00:56:53 +0000 (0:00:03.107) 0:00:41.824 ***** 2025-09-02 00:58:48.560260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:58:48.560272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:58:48.560362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-02 00:58:48.560384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-02 00:58:48.560404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-02 00:58:48.560419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-02 00:58:48.560431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-02 00:58:48.560441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-02 00:58:48.560452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-02 00:58:48.560468 | orchestrator | 2025-09-02 00:58:48.560479 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-02 00:58:48.560490 | orchestrator | Tuesday 02 September 2025 00:56:55 +0000 (0:00:02.378) 0:00:44.202 ***** 2025-09-02 00:58:48.560500 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:58:48.560511 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:58:48.560521 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:58:48.560532 | orchestrator | 2025-09-02 00:58:48.560542 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-02 00:58:48.560552 | orchestrator | Tuesday 02 September 2025 00:56:55 +0000 (0:00:00.302) 0:00:44.505 ***** 2025-09-02 00:58:48.560562 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:58:48.560573 | orchestrator | 2025-09-02 00:58:48.560583 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-02 00:58:48.560634 | orchestrator | Tuesday 02 September 2025 00:56:58 +0000 (0:00:02.262) 0:00:46.767 ***** 2025-09-02 00:58:48.560647 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:58:48.560657 | orchestrator | 2025-09-02 00:58:48.560666 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-02 00:58:48.560676 | orchestrator | Tuesday 02 September 2025 00:57:00 +0000 (0:00:02.137) 0:00:48.905 ***** 2025-09-02 00:58:48.560685 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:58:48.560695 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:58:48.560705 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:58:48.560714 | orchestrator | 2025-09-02 00:58:48.560724 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-02 00:58:48.560739 | orchestrator | Tuesday 02 September 2025 00:57:01 +0000 (0:00:00.979) 0:00:49.884 ***** 2025-09-02 00:58:48.560750 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:58:48.560759 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:58:48.560769 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:58:48.560779 | orchestrator | 2025-09-02 00:58:48.560788 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-02 00:58:48.560798 | 
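For readability, the keystone item logged in the "Check keystone containers" task above corresponds roughly to the following structure (reconstructed from the logged dict; only the healthcheck address 192.168.16.10/11/12 differs per node, and empty volume entries and tls_backend are omitted here):

    # reconstructed from the 'Check keystone containers' items above, not the role's source
    keystone:
      container_name: keystone
      group: keystone
      enabled: true
      image: registry.osism.tech/kolla/keystone:2024.2
      volumes:
        - /etc/kolla/keystone/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - kolla_logs:/var/log/kolla/
        - keystone_fernet_tokens:/etc/keystone/fernet-keys
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"]
        timeout: "30"
      haproxy:
        keystone_internal:
          enabled: true
          mode: http
          external: false
          port: "5000"
          listen_port: "5000"
          backend_http_extra: ["balance roundrobin"]
        keystone_external:
          enabled: true
          mode: http
          external: true
          external_fqdn: api.testbed.osism.xyz
          port: "5000"
          listen_port: "5000"

The keystone-ssh and keystone-fernet items follow the same pattern, with sshd on port 8023 and /usr/bin/fernet-healthcheck.sh as their respective healthchecks.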
orchestrator | Tuesday 02 September 2025 00:57:01 +0000 (0:00:00.565) 0:00:50.449 ***** 2025-09-02 00:58:48.560808 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:58:48.560818 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:58:48.560827 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:58:48.560837 | orchestrator | 2025-09-02 00:58:48.560846 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-02 00:58:48.560856 | orchestrator | Tuesday 02 September 2025 00:57:02 +0000 (0:00:00.339) 0:00:50.789 ***** 2025-09-02 00:58:48.560866 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:58:48.560876 | orchestrator | 2025-09-02 00:58:48.560885 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-02 00:58:48.560900 | orchestrator | Tuesday 02 September 2025 00:57:15 +0000 (0:00:13.648) 0:01:04.438 ***** 2025-09-02 00:58:48.560910 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:58:48.560920 | orchestrator | 2025-09-02 00:58:48.560929 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-02 00:58:48.560939 | orchestrator | Tuesday 02 September 2025 00:57:26 +0000 (0:00:10.156) 0:01:14.595 ***** 2025-09-02 00:58:48.560949 | orchestrator | 2025-09-02 00:58:48.560958 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-02 00:58:48.560968 | orchestrator | Tuesday 02 September 2025 00:57:26 +0000 (0:00:00.065) 0:01:14.660 ***** 2025-09-02 00:58:48.560977 | orchestrator | 2025-09-02 00:58:48.560987 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-02 00:58:48.560997 | orchestrator | Tuesday 02 September 2025 00:57:26 +0000 (0:00:00.064) 0:01:14.724 ***** 2025-09-02 00:58:48.561006 | orchestrator | 2025-09-02 00:58:48.561016 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-02 00:58:48.561032 | orchestrator | Tuesday 02 September 2025 00:57:26 +0000 (0:00:00.069) 0:01:14.793 ***** 2025-09-02 00:58:48.561041 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:58:48.561051 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:58:48.561060 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:58:48.561068 | orchestrator | 2025-09-02 00:58:48.561076 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-02 00:58:48.561084 | orchestrator | Tuesday 02 September 2025 00:57:44 +0000 (0:00:17.764) 0:01:32.558 ***** 2025-09-02 00:58:48.561092 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:58:48.561100 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:58:48.561108 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:58:48.561116 | orchestrator | 2025-09-02 00:58:48.561123 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-02 00:58:48.561131 | orchestrator | Tuesday 02 September 2025 00:57:48 +0000 (0:00:04.790) 0:01:37.348 ***** 2025-09-02 00:58:48.561139 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:58:48.561147 | orchestrator | changed: [testbed-node-1] 2025-09-02 00:58:48.561155 | orchestrator | changed: [testbed-node-2] 2025-09-02 00:58:48.561163 | orchestrator | 2025-09-02 00:58:48.561171 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-02 00:58:48.561179 
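The "Running Keystone bootstrap container" and "Running Keystone fernet bootstrap container" tasks above wrap keystone-manage. A minimal sketch of the underlying commands, written as Ansible tasks purely for illustration (kolla-ansible actually runs them inside dedicated bootstrap containers; the password variable name is an assumption, while the URLs and region match the endpoints logged below):

    - name: Bootstrap the Keystone service and admin identity (illustrative sketch)
      ansible.builtin.command: >
        keystone-manage bootstrap
        --bootstrap-password "{{ keystone_admin_password }}"
        --bootstrap-internal-url https://api-int.testbed.osism.xyz:5000
        --bootstrap-public-url https://api.testbed.osism.xyz:5000
        --bootstrap-region-id RegionOne

    - name: Initialise the fernet key repository (illustrative sketch)
      ansible.builtin.command: >
        keystone-manage fernet_setup
        --keystone-user keystone
        --keystone-group keystone

The distribute_fernet.yml tasks that follow then copy /etc/keystone/fernet-keys from the key master to the other controllers over the keystone-ssh containers, roughly an rsync over the sshd that listens on port 8023 as shown in the healthcheck above.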
| orchestrator | Tuesday 02 September 2025 00:57:59 +0000 (0:00:11.156) 0:01:48.505 ***** 2025-09-02 00:58:48.561187 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 00:58:48.561195 | orchestrator | 2025-09-02 00:58:48.561202 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-02 00:58:48.561210 | orchestrator | Tuesday 02 September 2025 00:58:00 +0000 (0:00:00.763) 0:01:49.268 ***** 2025-09-02 00:58:48.561218 | orchestrator | ok: [testbed-node-1] 2025-09-02 00:58:48.561226 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:58:48.561234 | orchestrator | ok: [testbed-node-2] 2025-09-02 00:58:48.561242 | orchestrator | 2025-09-02 00:58:48.561250 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-02 00:58:48.561258 | orchestrator | Tuesday 02 September 2025 00:58:01 +0000 (0:00:00.702) 0:01:49.971 ***** 2025-09-02 00:58:48.561266 | orchestrator | changed: [testbed-node-0] 2025-09-02 00:58:48.561274 | orchestrator | 2025-09-02 00:58:48.561281 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-02 00:58:48.561289 | orchestrator | Tuesday 02 September 2025 00:58:03 +0000 (0:00:01.835) 0:01:51.807 ***** 2025-09-02 00:58:48.561297 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-02 00:58:48.561305 | orchestrator | 2025-09-02 00:58:48.561313 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-02 00:58:48.561321 | orchestrator | Tuesday 02 September 2025 00:58:13 +0000 (0:00:10.479) 0:02:02.286 ***** 2025-09-02 00:58:48.561329 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-02 00:58:48.561337 | orchestrator | 2025-09-02 00:58:48.561345 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-02 00:58:48.561353 | orchestrator | Tuesday 02 September 2025 00:58:36 +0000 (0:00:22.465) 0:02:24.752 ***** 2025-09-02 00:58:48.561361 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-02 00:58:48.561369 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-02 00:58:48.561377 | orchestrator | 2025-09-02 00:58:48.561384 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-02 00:58:48.561393 | orchestrator | Tuesday 02 September 2025 00:58:43 +0000 (0:00:07.139) 0:02:31.892 ***** 2025-09-02 00:58:48.561400 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:58:48.561408 | orchestrator | 2025-09-02 00:58:48.561416 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-02 00:58:48.561424 | orchestrator | Tuesday 02 September 2025 00:58:43 +0000 (0:00:00.131) 0:02:32.023 ***** 2025-09-02 00:58:48.561436 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:58:48.561444 | orchestrator | 2025-09-02 00:58:48.561456 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-02 00:58:48.561464 | orchestrator | Tuesday 02 September 2025 00:58:43 +0000 (0:00:00.118) 0:02:32.142 ***** 2025-09-02 00:58:48.561472 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:58:48.561480 | orchestrator | 2025-09-02 00:58:48.561488 | 
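The service-ks-register tasks above populate the Keystone service catalog. A rough equivalent using the openstack.cloud collection, for illustration only (the role itself drives these modules through variables; auth setup is omitted and option names may differ slightly between collection versions):

    - name: Create the identity service entry (rough equivalent, not the role's code)
      openstack.cloud.catalog_service:
        name: keystone
        service_type: identity
        state: present

    - name: Create the internal and public endpoints (rough equivalent)
      openstack.cloud.endpoint:
        service: keystone
        endpoint_interface: "{{ item.interface }}"
        url: "{{ item.url }}"
        region: RegionOne
        state: present
      loop:
        - { interface: internal, url: "https://api-int.testbed.osism.xyz:5000" }
        - { interface: public, url: "https://api.testbed.osism.xyz:5000" }

The barbican play further down in this log uses the same role for its key-manager service, its endpoints on port 9311, and the creator, observer, and audit roles.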
orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-02 00:58:48.561496 | orchestrator | Tuesday 02 September 2025 00:58:43 +0000 (0:00:00.121) 0:02:32.263 ***** 2025-09-02 00:58:48.561504 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:58:48.561512 | orchestrator | 2025-09-02 00:58:48.561519 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-02 00:58:48.561527 | orchestrator | Tuesday 02 September 2025 00:58:44 +0000 (0:00:00.523) 0:02:32.787 ***** 2025-09-02 00:58:48.561535 | orchestrator | ok: [testbed-node-0] 2025-09-02 00:58:48.561543 | orchestrator | 2025-09-02 00:58:48.561551 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-02 00:58:48.561563 | orchestrator | Tuesday 02 September 2025 00:58:47 +0000 (0:00:03.169) 0:02:35.956 ***** 2025-09-02 00:58:48.561571 | orchestrator | skipping: [testbed-node-0] 2025-09-02 00:58:48.561579 | orchestrator | skipping: [testbed-node-1] 2025-09-02 00:58:48.561587 | orchestrator | skipping: [testbed-node-2] 2025-09-02 00:58:48.561607 | orchestrator | 2025-09-02 00:58:48.561615 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 00:58:48.561624 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-02 00:58:48.561632 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-02 00:58:48.561640 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-02 00:58:48.561648 | orchestrator | 2025-09-02 00:58:48.561656 | orchestrator | 2025-09-02 00:58:48.561663 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 00:58:48.561671 | orchestrator | Tuesday 02 September 2025 00:58:48 +0000 (0:00:00.587) 0:02:36.544 ***** 2025-09-02 00:58:48.561679 | orchestrator | =============================================================================== 2025-09-02 00:58:48.561687 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.47s 2025-09-02 00:58:48.561695 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 17.76s 2025-09-02 00:58:48.561703 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.65s 2025-09-02 00:58:48.561711 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.16s 2025-09-02 00:58:48.561718 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.48s 2025-09-02 00:58:48.561726 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.16s 2025-09-02 00:58:48.561734 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.20s 2025-09-02 00:58:48.561742 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.14s 2025-09-02 00:58:48.561750 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.56s 2025-09-02 00:58:48.561757 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.79s 2025-09-02 00:58:48.561765 | orchestrator | keystone : Creating default user role ----------------------------------- 3.17s 2025-09-02 00:58:48.561773 | 
orchestrator | keystone : Copying over config.json files for services ------------------ 3.16s 2025-09-02 00:58:48.561781 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.11s 2025-09-02 00:58:48.561788 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 2.88s 2025-09-02 00:58:48.561801 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.46s 2025-09-02 00:58:48.561809 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.38s 2025-09-02 00:58:48.561817 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.26s 2025-09-02 00:58:48.561825 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.14s 2025-09-02 00:58:48.561833 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.87s 2025-09-02 00:58:48.561840 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.84s 2025-09-02 00:58:48.561848 | orchestrator | 2025-09-02 00:58:48 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:58:48.561856 | orchestrator | 2025-09-02 00:58:48 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:58:48.561864 | orchestrator | 2025-09-02 00:58:48 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:51.594376 | orchestrator | 2025-09-02 00:58:51 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:58:51.598124 | orchestrator | 2025-09-02 00:58:51 | INFO  | Task b7bc5a84-fd43-478e-94b1-6f3589d60154 is in state SUCCESS 2025-09-02 00:58:51.598785 | orchestrator | 2025-09-02 00:58:51 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:58:51.599638 | orchestrator | 2025-09-02 00:58:51 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:58:51.600452 | orchestrator | 2025-09-02 00:58:51 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:58:51.603012 | orchestrator | 2025-09-02 00:58:51 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:58:51.603091 | orchestrator | 2025-09-02 00:58:51 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:54.736973 | orchestrator | 2025-09-02 00:58:54 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:58:54.737064 | orchestrator | 2025-09-02 00:58:54 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:58:54.737080 | orchestrator | 2025-09-02 00:58:54 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:58:54.737106 | orchestrator | 2025-09-02 00:58:54 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:58:54.737119 | orchestrator | 2025-09-02 00:58:54 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:58:54.737130 | orchestrator | 2025-09-02 00:58:54 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:58:57.675246 | orchestrator | 2025-09-02 00:58:57 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:58:57.675873 | orchestrator | 2025-09-02 00:58:57 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:58:57.676526 | orchestrator | 2025-09-02 00:58:57 | INFO  | Task 
550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:58:57.677355 | orchestrator | 2025-09-02 00:58:57 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:58:57.677818 | orchestrator | 2025-09-02 00:58:57 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:58:57.678083 | orchestrator | 2025-09-02 00:58:57 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:00.732648 | orchestrator | 2025-09-02 00:59:00 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:00.732726 | orchestrator | 2025-09-02 00:59:00 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:00.733835 | orchestrator | 2025-09-02 00:59:00 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:00.734666 | orchestrator | 2025-09-02 00:59:00 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:00.735427 | orchestrator | 2025-09-02 00:59:00 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:00.735446 | orchestrator | 2025-09-02 00:59:00 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:03.782268 | orchestrator | 2025-09-02 00:59:03 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:03.784844 | orchestrator | 2025-09-02 00:59:03 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:03.787693 | orchestrator | 2025-09-02 00:59:03 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:03.789825 | orchestrator | 2025-09-02 00:59:03 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:03.791635 | orchestrator | 2025-09-02 00:59:03 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:03.791660 | orchestrator | 2025-09-02 00:59:03 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:06.826319 | orchestrator | 2025-09-02 00:59:06 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:06.828469 | orchestrator | 2025-09-02 00:59:06 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:06.830174 | orchestrator | 2025-09-02 00:59:06 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:06.831812 | orchestrator | 2025-09-02 00:59:06 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:06.833408 | orchestrator | 2025-09-02 00:59:06 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:06.833434 | orchestrator | 2025-09-02 00:59:06 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:09.866755 | orchestrator | 2025-09-02 00:59:09 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:09.866973 | orchestrator | 2025-09-02 00:59:09 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:09.868094 | orchestrator | 2025-09-02 00:59:09 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:09.869523 | orchestrator | 2025-09-02 00:59:09 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:09.870158 | orchestrator | 2025-09-02 00:59:09 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:09.870183 | orchestrator | 2025-09-02 
00:59:09 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:12.908691 | orchestrator | 2025-09-02 00:59:12 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:12.911374 | orchestrator | 2025-09-02 00:59:12 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:12.913597 | orchestrator | 2025-09-02 00:59:12 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:12.915519 | orchestrator | 2025-09-02 00:59:12 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:12.916781 | orchestrator | 2025-09-02 00:59:12 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:12.916805 | orchestrator | 2025-09-02 00:59:12 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:15.960239 | orchestrator | 2025-09-02 00:59:15 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:15.960963 | orchestrator | 2025-09-02 00:59:15 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:15.962386 | orchestrator | 2025-09-02 00:59:15 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:15.963492 | orchestrator | 2025-09-02 00:59:15 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:15.964949 | orchestrator | 2025-09-02 00:59:15 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:15.964971 | orchestrator | 2025-09-02 00:59:15 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:19.797517 | orchestrator | 2025-09-02 00:59:18 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:19.797647 | orchestrator | 2025-09-02 00:59:18 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:19.797667 | orchestrator | 2025-09-02 00:59:19 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:19.797678 | orchestrator | 2025-09-02 00:59:19 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:19.797689 | orchestrator | 2025-09-02 00:59:19 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:19.797700 | orchestrator | 2025-09-02 00:59:19 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:22.049099 | orchestrator | 2025-09-02 00:59:22 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:22.049431 | orchestrator | 2025-09-02 00:59:22 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:22.049820 | orchestrator | 2025-09-02 00:59:22 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:22.050466 | orchestrator | 2025-09-02 00:59:22 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:22.051049 | orchestrator | 2025-09-02 00:59:22 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:22.051071 | orchestrator | 2025-09-02 00:59:22 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:25.074350 | orchestrator | 2025-09-02 00:59:25 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:25.074446 | orchestrator | 2025-09-02 00:59:25 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:25.074605 | orchestrator | 2025-09-02 
00:59:25 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:25.075314 | orchestrator | 2025-09-02 00:59:25 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:25.075856 | orchestrator | 2025-09-02 00:59:25 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:25.075881 | orchestrator | 2025-09-02 00:59:25 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:28.099793 | orchestrator | 2025-09-02 00:59:28 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:28.100274 | orchestrator | 2025-09-02 00:59:28 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:28.101744 | orchestrator | 2025-09-02 00:59:28 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:28.103421 | orchestrator | 2025-09-02 00:59:28 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:28.103980 | orchestrator | 2025-09-02 00:59:28 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:28.104003 | orchestrator | 2025-09-02 00:59:28 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:31.148593 | orchestrator | 2025-09-02 00:59:31 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:31.149468 | orchestrator | 2025-09-02 00:59:31 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:31.151448 | orchestrator | 2025-09-02 00:59:31 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:31.152796 | orchestrator | 2025-09-02 00:59:31 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:31.154783 | orchestrator | 2025-09-02 00:59:31 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:31.156447 | orchestrator | 2025-09-02 00:59:31 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:34.193222 | orchestrator | 2025-09-02 00:59:34 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:34.193948 | orchestrator | 2025-09-02 00:59:34 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:34.242547 | orchestrator | 2025-09-02 00:59:34 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:34.242685 | orchestrator | 2025-09-02 00:59:34 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:34.242710 | orchestrator | 2025-09-02 00:59:34 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:34.242730 | orchestrator | 2025-09-02 00:59:34 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:37.238699 | orchestrator | 2025-09-02 00:59:37 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:37.242171 | orchestrator | 2025-09-02 00:59:37 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:37.242525 | orchestrator | 2025-09-02 00:59:37 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:37.243152 | orchestrator | 2025-09-02 00:59:37 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:37.243856 | orchestrator | 2025-09-02 00:59:37 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:37.243878 | 
orchestrator | 2025-09-02 00:59:37 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:40.284163 | orchestrator | 2025-09-02 00:59:40 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:40.284347 | orchestrator | 2025-09-02 00:59:40 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:40.285222 | orchestrator | 2025-09-02 00:59:40 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:40.285907 | orchestrator | 2025-09-02 00:59:40 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:40.286708 | orchestrator | 2025-09-02 00:59:40 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:40.286733 | orchestrator | 2025-09-02 00:59:40 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:43.345213 | orchestrator | 2025-09-02 00:59:43 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:43.347192 | orchestrator | 2025-09-02 00:59:43 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:43.347885 | orchestrator | 2025-09-02 00:59:43 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:43.350089 | orchestrator | 2025-09-02 00:59:43 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:43.350829 | orchestrator | 2025-09-02 00:59:43 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:43.350857 | orchestrator | 2025-09-02 00:59:43 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:46.377837 | orchestrator | 2025-09-02 00:59:46 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:46.378184 | orchestrator | 2025-09-02 00:59:46 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:46.378597 | orchestrator | 2025-09-02 00:59:46 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:46.379122 | orchestrator | 2025-09-02 00:59:46 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:46.380729 | orchestrator | 2025-09-02 00:59:46 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:46.380748 | orchestrator | 2025-09-02 00:59:46 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:49.414142 | orchestrator | 2025-09-02 00:59:49 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:49.414243 | orchestrator | 2025-09-02 00:59:49 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:49.415897 | orchestrator | 2025-09-02 00:59:49 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:49.416286 | orchestrator | 2025-09-02 00:59:49 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:49.416879 | orchestrator | 2025-09-02 00:59:49 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:49.417032 | orchestrator | 2025-09-02 00:59:49 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:52.459567 | orchestrator | 2025-09-02 00:59:52 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:52.459979 | orchestrator | 2025-09-02 00:59:52 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:52.460809 | 
orchestrator | 2025-09-02 00:59:52 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:52.462798 | orchestrator | 2025-09-02 00:59:52 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:52.463483 | orchestrator | 2025-09-02 00:59:52 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:52.463732 | orchestrator | 2025-09-02 00:59:52 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:55.491377 | orchestrator | 2025-09-02 00:59:55 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:55.491904 | orchestrator | 2025-09-02 00:59:55 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:55.492393 | orchestrator | 2025-09-02 00:59:55 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:55.493114 | orchestrator | 2025-09-02 00:59:55 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:55.494820 | orchestrator | 2025-09-02 00:59:55 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:55.494850 | orchestrator | 2025-09-02 00:59:55 | INFO  | Wait 1 second(s) until the next check 2025-09-02 00:59:58.518817 | orchestrator | 2025-09-02 00:59:58 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 00:59:58.519128 | orchestrator | 2025-09-02 00:59:58 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 00:59:58.519833 | orchestrator | 2025-09-02 00:59:58 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 00:59:58.521295 | orchestrator | 2025-09-02 00:59:58 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 00:59:58.521873 | orchestrator | 2025-09-02 00:59:58 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 00:59:58.522069 | orchestrator | 2025-09-02 00:59:58 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:01.547155 | orchestrator | 2025-09-02 01:00:01 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:01.547254 | orchestrator | 2025-09-02 01:00:01 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:01.547906 | orchestrator | 2025-09-02 01:00:01 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 01:00:01.548487 | orchestrator | 2025-09-02 01:00:01 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:01.550168 | orchestrator | 2025-09-02 01:00:01 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:00:01.550194 | orchestrator | 2025-09-02 01:00:01 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:04.573051 | orchestrator | 2025-09-02 01:00:04 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:04.573658 | orchestrator | 2025-09-02 01:00:04 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:04.687154 | orchestrator | 2025-09-02 01:00:04 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 01:00:04.687219 | orchestrator | 2025-09-02 01:00:04 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:04.687232 | orchestrator | 2025-09-02 01:00:04 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 
2025-09-02 01:00:04.687244 | orchestrator | 2025-09-02 01:00:04 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:07.605704 | orchestrator | 2025-09-02 01:00:07 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:07.606119 | orchestrator | 2025-09-02 01:00:07 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:07.606699 | orchestrator | 2025-09-02 01:00:07 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state STARTED 2025-09-02 01:00:07.607415 | orchestrator | 2025-09-02 01:00:07 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:07.608171 | orchestrator | 2025-09-02 01:00:07 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:00:07.608306 | orchestrator | 2025-09-02 01:00:07 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:10.637851 | orchestrator | 2025-09-02 01:00:10 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:10.637940 | orchestrator | 2025-09-02 01:00:10 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:10.638393 | orchestrator | 2025-09-02 01:00:10 | INFO  | Task 550140b5-f2b8-496e-ada6-4232aa052c06 is in state SUCCESS 2025-09-02 01:00:10.639242 | orchestrator | 2025-09-02 01:00:10 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:10.639866 | orchestrator | 2025-09-02 01:00:10 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:00:10.639920 | orchestrator | 2025-09-02 01:00:10 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:13.666673 | orchestrator | 2025-09-02 01:00:13 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:13.666888 | orchestrator | 2025-09-02 01:00:13 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:13.668518 | orchestrator | 2025-09-02 01:00:13 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:13.669150 | orchestrator | 2025-09-02 01:00:13 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:00:13.669172 | orchestrator | 2025-09-02 01:00:13 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:16.692173 | orchestrator | 2025-09-02 01:00:16 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:16.692424 | orchestrator | 2025-09-02 01:00:16 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:16.692944 | orchestrator | 2025-09-02 01:00:16 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:16.693663 | orchestrator | 2025-09-02 01:00:16 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:00:16.693696 | orchestrator | 2025-09-02 01:00:16 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:19.880641 | orchestrator | 2025-09-02 01:00:19 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:19.880745 | orchestrator | 2025-09-02 01:00:19 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:19.880760 | orchestrator | 2025-09-02 01:00:19 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:19.880772 | orchestrator | 2025-09-02 01:00:19 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 
2025-09-02 01:00:19.880784 | orchestrator | 2025-09-02 01:00:19 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:22.750915 | orchestrator | 2025-09-02 01:00:22 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:22.751024 | orchestrator | 2025-09-02 01:00:22 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:22.752171 | orchestrator | 2025-09-02 01:00:22 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:22.754490 | orchestrator | 2025-09-02 01:00:22 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:00:22.754790 | orchestrator | 2025-09-02 01:00:22 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:25.779090 | orchestrator | 2025-09-02 01:00:25 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:25.779461 | orchestrator | 2025-09-02 01:00:25 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:25.780214 | orchestrator | 2025-09-02 01:00:25 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:25.780902 | orchestrator | 2025-09-02 01:00:25 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:00:25.780936 | orchestrator | 2025-09-02 01:00:25 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:28.815385 | orchestrator | 2025-09-02 01:00:28 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:28.816003 | orchestrator | 2025-09-02 01:00:28 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:28.818912 | orchestrator | 2025-09-02 01:00:28 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:28.821300 | orchestrator | 2025-09-02 01:00:28 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:00:28.821342 | orchestrator | 2025-09-02 01:00:28 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:31.855193 | orchestrator | 2025-09-02 01:00:31 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:31.855305 | orchestrator | 2025-09-02 01:00:31 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:31.855792 | orchestrator | 2025-09-02 01:00:31 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:31.856490 | orchestrator | 2025-09-02 01:00:31 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:00:31.856516 | orchestrator | 2025-09-02 01:00:31 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:34.892447 | orchestrator | 2025-09-02 01:00:34 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:34.892809 | orchestrator | 2025-09-02 01:00:34 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:34.893495 | orchestrator | 2025-09-02 01:00:34 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:34.894314 | orchestrator | 2025-09-02 01:00:34 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:00:34.894344 | orchestrator | 2025-09-02 01:00:34 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:37.923202 | orchestrator | 2025-09-02 01:00:37 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:37.923521 
| orchestrator | 2025-09-02 01:00:37 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:37.924289 | orchestrator | 2025-09-02 01:00:37 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:37.924914 | orchestrator | 2025-09-02 01:00:37 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:00:37.925021 | orchestrator | 2025-09-02 01:00:37 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:40.957829 | orchestrator | 2025-09-02 01:00:40 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:40.957919 | orchestrator | 2025-09-02 01:00:40 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:40.958749 | orchestrator | 2025-09-02 01:00:40 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:40.959839 | orchestrator | 2025-09-02 01:00:40 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:00:40.959864 | orchestrator | 2025-09-02 01:00:40 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:43.993955 | orchestrator | 2025-09-02 01:00:43 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:43.994981 | orchestrator | 2025-09-02 01:00:43 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:43.995013 | orchestrator | 2025-09-02 01:00:43 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:43.995362 | orchestrator | 2025-09-02 01:00:43 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:00:43.995540 | orchestrator | 2025-09-02 01:00:43 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:47.044116 | orchestrator | 2025-09-02 01:00:47 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:47.048019 | orchestrator | 2025-09-02 01:00:47 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:47.050455 | orchestrator | 2025-09-02 01:00:47 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:47.053390 | orchestrator | 2025-09-02 01:00:47 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:00:47.053436 | orchestrator | 2025-09-02 01:00:47 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:50.104829 | orchestrator | 2025-09-02 01:00:50 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:50.106911 | orchestrator | 2025-09-02 01:00:50 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:50.108709 | orchestrator | 2025-09-02 01:00:50 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:50.109998 | orchestrator | 2025-09-02 01:00:50 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:00:50.110063 | orchestrator | 2025-09-02 01:00:50 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:53.156126 | orchestrator | 2025-09-02 01:00:53 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:53.156584 | orchestrator | 2025-09-02 01:00:53 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:53.157442 | orchestrator | 2025-09-02 01:00:53 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:53.158632 | 
orchestrator | 2025-09-02 01:00:53 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:00:53.158661 | orchestrator | 2025-09-02 01:00:53 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:56.228995 | orchestrator | 2025-09-02 01:00:56 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state STARTED 2025-09-02 01:00:56.229324 | orchestrator | 2025-09-02 01:00:56 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 2025-09-02 01:00:56.230891 | orchestrator | 2025-09-02 01:00:56 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:56.231709 | orchestrator | 2025-09-02 01:00:56 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:00:56.231822 | orchestrator | 2025-09-02 01:00:56 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:00:59.362003 | orchestrator | 2025-09-02 01:00:59 | INFO  | Task f748ad8d-2028-4c2b-991b-46634f187ded is in state SUCCESS 2025-09-02 01:00:59.363294 | orchestrator | 2025-09-02 01:00:59.363335 | orchestrator | 2025-09-02 01:00:59.363348 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 01:00:59.363360 | orchestrator | 2025-09-02 01:00:59.363371 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 01:00:59.363382 | orchestrator | Tuesday 02 September 2025 00:58:46 +0000 (0:00:00.178) 0:00:00.178 ***** 2025-09-02 01:00:59.363495 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:00:59.363508 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:00:59.363555 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:00:59.363568 | orchestrator | 2025-09-02 01:00:59.363629 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 01:00:59.363694 | orchestrator | Tuesday 02 September 2025 00:58:46 +0000 (0:00:00.317) 0:00:00.496 ***** 2025-09-02 01:00:59.363705 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-02 01:00:59.363716 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-02 01:00:59.363727 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-02 01:00:59.363760 | orchestrator | 2025-09-02 01:00:59.363772 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-09-02 01:00:59.363783 | orchestrator | 2025-09-02 01:00:59.363794 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-09-02 01:00:59.363805 | orchestrator | Tuesday 02 September 2025 00:58:47 +0000 (0:00:00.710) 0:00:01.206 ***** 2025-09-02 01:00:59.363815 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:00:59.363826 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:00:59.363837 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:00:59.363848 | orchestrator | 2025-09-02 01:00:59.363859 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 01:00:59.363870 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:00:59.363883 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:00:59.363894 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:00:59.363905 | orchestrator | 2025-09-02 
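The "Waiting for Keystone public port to be UP" task above is a plain TCP reachability check run on each controller; a minimal sketch of such a check (host and timeout are illustrative values, not taken from the playbook):

    - name: Wait until the Keystone public endpoint accepts TCP connections
      ansible.builtin.wait_for:
        host: api.testbed.osism.xyz
        port: 5000
        timeout: 300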
01:00:59.363916 | orchestrator | 2025-09-02 01:00:59.363926 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 01:00:59.363938 | orchestrator | Tuesday 02 September 2025 00:58:48 +0000 (0:00:00.812) 0:00:02.020 ***** 2025-09-02 01:00:59.363949 | orchestrator | =============================================================================== 2025-09-02 01:00:59.363959 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.81s 2025-09-02 01:00:59.363970 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s 2025-09-02 01:00:59.363981 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-09-02 01:00:59.363992 | orchestrator | 2025-09-02 01:00:59.364003 | orchestrator | 2025-09-02 01:00:59.364014 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-09-02 01:00:59.364025 | orchestrator | 2025-09-02 01:00:59.364048 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-09-02 01:00:59.364059 | orchestrator | Tuesday 02 September 2025 00:58:46 +0000 (0:00:00.266) 0:00:00.266 ***** 2025-09-02 01:00:59.364070 | orchestrator | changed: [testbed-manager] 2025-09-02 01:00:59.364082 | orchestrator | 2025-09-02 01:00:59.364093 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-09-02 01:00:59.364104 | orchestrator | Tuesday 02 September 2025 00:58:48 +0000 (0:00:01.498) 0:00:01.764 ***** 2025-09-02 01:00:59.364115 | orchestrator | changed: [testbed-manager] 2025-09-02 01:00:59.364126 | orchestrator | 2025-09-02 01:00:59.364137 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-09-02 01:00:59.364148 | orchestrator | Tuesday 02 September 2025 00:58:49 +0000 (0:00:01.106) 0:00:02.871 ***** 2025-09-02 01:00:59.364158 | orchestrator | changed: [testbed-manager] 2025-09-02 01:00:59.364169 | orchestrator | 2025-09-02 01:00:59.364180 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-02 01:00:59.364191 | orchestrator | Tuesday 02 September 2025 00:58:50 +0000 (0:00:01.158) 0:00:04.029 ***** 2025-09-02 01:00:59.364202 | orchestrator | changed: [testbed-manager] 2025-09-02 01:00:59.364213 | orchestrator | 2025-09-02 01:00:59.364224 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-02 01:00:59.364235 | orchestrator | Tuesday 02 September 2025 00:58:51 +0000 (0:00:01.246) 0:00:05.276 ***** 2025-09-02 01:00:59.364246 | orchestrator | changed: [testbed-manager] 2025-09-02 01:00:59.364257 | orchestrator | 2025-09-02 01:00:59.364268 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-09-02 01:00:59.364279 | orchestrator | Tuesday 02 September 2025 00:58:52 +0000 (0:00:01.210) 0:00:06.487 ***** 2025-09-02 01:00:59.364290 | orchestrator | changed: [testbed-manager] 2025-09-02 01:00:59.364301 | orchestrator | 2025-09-02 01:00:59.364329 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-02 01:00:59.364349 | orchestrator | Tuesday 02 September 2025 00:58:54 +0000 (0:00:01.256) 0:00:07.743 ***** 2025-09-02 01:00:59.364360 | orchestrator | changed: [testbed-manager] 2025-09-02 01:00:59.364372 | orchestrator | 2025-09-02 01:00:59.364383 
| orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-02 01:00:59.364393 | orchestrator | Tuesday 02 September 2025 00:58:56 +0000 (0:00:02.086) 0:00:09.829 ***** 2025-09-02 01:00:59.364404 | orchestrator | changed: [testbed-manager] 2025-09-02 01:00:59.364415 | orchestrator | 2025-09-02 01:00:59.364426 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-02 01:00:59.364437 | orchestrator | Tuesday 02 September 2025 00:58:57 +0000 (0:00:01.174) 0:00:11.003 ***** 2025-09-02 01:00:59.364448 | orchestrator | changed: [testbed-manager] 2025-09-02 01:00:59.364459 | orchestrator | 2025-09-02 01:00:59.364470 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-02 01:00:59.364481 | orchestrator | Tuesday 02 September 2025 00:59:42 +0000 (0:00:45.651) 0:00:56.655 ***** 2025-09-02 01:00:59.364504 | orchestrator | skipping: [testbed-manager] 2025-09-02 01:00:59.364515 | orchestrator | 2025-09-02 01:00:59.364526 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-02 01:00:59.364537 | orchestrator | 2025-09-02 01:00:59.364548 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-02 01:00:59.364559 | orchestrator | Tuesday 02 September 2025 00:59:43 +0000 (0:00:00.185) 0:00:56.840 ***** 2025-09-02 01:00:59.364570 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:00:59.364581 | orchestrator | 2025-09-02 01:00:59.364610 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-02 01:00:59.364622 | orchestrator | 2025-09-02 01:00:59.364632 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-02 01:00:59.364643 | orchestrator | Tuesday 02 September 2025 00:59:54 +0000 (0:00:11.745) 0:01:08.585 ***** 2025-09-02 01:00:59.364654 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:00:59.364664 | orchestrator | 2025-09-02 01:00:59.364675 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-02 01:00:59.364686 | orchestrator | 2025-09-02 01:00:59.364697 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-02 01:00:59.364708 | orchestrator | Tuesday 02 September 2025 01:00:06 +0000 (0:00:11.286) 0:01:19.872 ***** 2025-09-02 01:00:59.364718 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:00:59.364729 | orchestrator | 2025-09-02 01:00:59.364740 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 01:00:59.364751 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-02 01:00:59.364763 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:00:59.364774 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:00:59.364785 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:00:59.364796 | orchestrator | 2025-09-02 01:00:59.364807 | orchestrator | 2025-09-02 01:00:59.364818 | orchestrator | 2025-09-02 01:00:59.364829 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 01:00:59.364839 
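The dashboard bootstrap play above drives the ceph CLI on the manager node. A condensed sketch of the equivalent commands, written as Ansible tasks for illustration (the admin user name and password file path are assumptions; the playbook's actual task names are the ones shown in the log):

    - name: Configure and re-enable the mgr dashboard module (illustrative sketch)
      ansible.builtin.command: "{{ item }}"
      loop:
        - ceph mgr module disable dashboard
        - ceph config set mgr mgr/dashboard/ssl false
        - ceph config set mgr mgr/dashboard/server_port 7000
        - ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
        - ceph config set mgr mgr/dashboard/standby_behaviour error
        - ceph config set mgr mgr/dashboard/standby_error_status_code 404
        - ceph mgr module enable dashboard

    - name: Create the dashboard admin user from a temporary password file (illustrative)
      ansible.builtin.command: >
        ceph dashboard ac-user-create admin
        -i /tmp/ceph_dashboard_password administrator

The three "Restart ceph manager service" plays that follow then bounce the mgr daemons one node at a time so the dashboard settings take effect.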
| orchestrator | Tuesday 02 September 2025 01:00:07 +0000 (0:00:01.154) 0:01:21.026 ***** 2025-09-02 01:00:59.364850 | orchestrator | =============================================================================== 2025-09-02 01:00:59.364861 | orchestrator | Create admin user ------------------------------------------------------ 45.65s 2025-09-02 01:00:59.364872 | orchestrator | Restart ceph manager service ------------------------------------------- 24.19s 2025-09-02 01:00:59.364883 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.09s 2025-09-02 01:00:59.364901 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.50s 2025-09-02 01:00:59.364930 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.26s 2025-09-02 01:00:59.364942 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.25s 2025-09-02 01:00:59.364953 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.21s 2025-09-02 01:00:59.364964 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.17s 2025-09-02 01:00:59.364975 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.16s 2025-09-02 01:00:59.364986 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.11s 2025-09-02 01:00:59.364997 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.19s 2025-09-02 01:00:59.365008 | orchestrator | 2025-09-02 01:00:59.365029 | orchestrator | 2025-09-02 01:00:59.365040 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 01:00:59.365052 | orchestrator | 2025-09-02 01:00:59.365064 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 01:00:59.365075 | orchestrator | Tuesday 02 September 2025 00:58:55 +0000 (0:00:00.493) 0:00:00.493 ***** 2025-09-02 01:00:59.365087 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:00:59.365098 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:00:59.365110 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:00:59.365122 | orchestrator | 2025-09-02 01:00:59.365133 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 01:00:59.365145 | orchestrator | Tuesday 02 September 2025 00:58:55 +0000 (0:00:00.359) 0:00:00.853 ***** 2025-09-02 01:00:59.365157 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-02 01:00:59.365168 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-09-02 01:00:59.365180 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-02 01:00:59.365191 | orchestrator | 2025-09-02 01:00:59.365203 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-02 01:00:59.365214 | orchestrator | 2025-09-02 01:00:59.365226 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-02 01:00:59.365238 | orchestrator | Tuesday 02 September 2025 00:58:56 +0000 (0:00:00.591) 0:00:01.444 ***** 2025-09-02 01:00:59.365249 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:00:59.365261 | orchestrator | 2025-09-02 01:00:59.365273 | orchestrator | TASK 
[service-ks-register : barbican | Creating services] ********************** 2025-09-02 01:00:59.365284 | orchestrator | Tuesday 02 September 2025 00:58:56 +0000 (0:00:00.891) 0:00:02.335 ***** 2025-09-02 01:00:59.365300 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-02 01:00:59.365319 | orchestrator | 2025-09-02 01:00:59.365339 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-02 01:00:59.365370 | orchestrator | Tuesday 02 September 2025 00:59:00 +0000 (0:00:03.743) 0:00:06.079 ***** 2025-09-02 01:00:59.365388 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-02 01:00:59.365407 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-02 01:00:59.365424 | orchestrator | 2025-09-02 01:00:59.365441 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-02 01:00:59.365461 | orchestrator | Tuesday 02 September 2025 00:59:08 +0000 (0:00:07.541) 0:00:13.621 ***** 2025-09-02 01:00:59.365479 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-02 01:00:59.365500 | orchestrator | 2025-09-02 01:00:59.365519 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-02 01:00:59.365538 | orchestrator | Tuesday 02 September 2025 00:59:11 +0000 (0:00:03.074) 0:00:16.696 ***** 2025-09-02 01:00:59.365555 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-02 01:00:59.365575 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-02 01:00:59.365617 | orchestrator | 2025-09-02 01:00:59.365629 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-02 01:00:59.365640 | orchestrator | Tuesday 02 September 2025 00:59:14 +0000 (0:00:03.467) 0:00:20.164 ***** 2025-09-02 01:00:59.365651 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-02 01:00:59.365662 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-02 01:00:59.365673 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-02 01:00:59.365683 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-02 01:00:59.365694 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-02 01:00:59.365705 | orchestrator | 2025-09-02 01:00:59.365716 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-02 01:00:59.365727 | orchestrator | Tuesday 02 September 2025 00:59:30 +0000 (0:00:15.442) 0:00:35.606 ***** 2025-09-02 01:00:59.365737 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-02 01:00:59.365748 | orchestrator | 2025-09-02 01:00:59.365759 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-02 01:00:59.365769 | orchestrator | Tuesday 02 September 2025 00:59:34 +0000 (0:00:04.653) 0:00:40.259 ***** 2025-09-02 01:00:59.365789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-02 01:00:59.365804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-02 01:00:59.365824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.365837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-02 01:00:59.365855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.365872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.365884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.365896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.365907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.365925 | orchestrator | 2025-09-02 01:00:59.365936 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-02 01:00:59.365953 | orchestrator | Tuesday 02 September 2025 00:59:37 +0000 (0:00:02.432) 0:00:42.692 ***** 2025-09-02 01:00:59.365964 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-02 01:00:59.365975 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-02 01:00:59.365986 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-02 01:00:59.365997 | orchestrator | 2025-09-02 01:00:59.366007 | orchestrator | TASK [barbican : Check if policies shall be 
overwritten] *********************** 2025-09-02 01:00:59.366062 | orchestrator | Tuesday 02 September 2025 00:59:39 +0000 (0:00:02.033) 0:00:44.730 ***** 2025-09-02 01:00:59.366077 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:00:59.366088 | orchestrator | 2025-09-02 01:00:59.366098 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-02 01:00:59.366109 | orchestrator | Tuesday 02 September 2025 00:59:39 +0000 (0:00:00.263) 0:00:44.993 ***** 2025-09-02 01:00:59.366120 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:00:59.366131 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:00:59.366142 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:00:59.366153 | orchestrator | 2025-09-02 01:00:59.366164 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-02 01:00:59.366175 | orchestrator | Tuesday 02 September 2025 00:59:40 +0000 (0:00:00.929) 0:00:45.923 ***** 2025-09-02 01:00:59.366186 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:00:59.366197 | orchestrator | 2025-09-02 01:00:59.366207 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-02 01:00:59.366218 | orchestrator | Tuesday 02 September 2025 00:59:41 +0000 (0:00:00.999) 0:00:46.923 ***** 2025-09-02 01:00:59.366230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-02 01:00:59.366247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-02 01:00:59.366259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-02 01:00:59.366286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.366299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.366310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.366326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.366338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.366349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.366367 | orchestrator | 2025-09-02 01:00:59.366378 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-02 01:00:59.366389 | orchestrator | Tuesday 02 September 2025 00:59:45 +0000 (0:00:03.643) 0:00:50.567 ***** 2025-09-02 01:00:59.366408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-02 01:00:59.366421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-02 01:00:59.366432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-02 01:00:59.366444 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:00:59.366466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-02 01:00:59.366478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-02 01:00:59.366495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-02 01:00:59.366506 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:00:59.366524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-02 01:00:59.366536 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-02 01:00:59.366547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-02 01:00:59.366558 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:00:59.366569 | orchestrator | 2025-09-02 01:00:59.366631 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-02 01:00:59.366645 | orchestrator | Tuesday 02 September 2025 00:59:47 +0000 (0:00:02.305) 0:00:52.872 ***** 2025-09-02 01:00:59.366657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-02 01:00:59.366676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-02 01:00:59.366696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-02 01:00:59.366707 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:00:59.366719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-02 01:00:59.366730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-02 01:00:59.366746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-02 01:00:59.366764 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:00:59.366776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-02 01:00:59.366793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-02 01:00:59.366805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-02 01:00:59.366816 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:00:59.366827 | orchestrator | 2025-09-02 01:00:59.366839 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-02 01:00:59.366850 | orchestrator | Tuesday 02 September 2025 00:59:49 +0000 (0:00:02.165) 0:00:55.037 ***** 2025-09-02 01:00:59.366861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-02 01:00:59.366878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-02 01:00:59.366896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-02 01:00:59.366915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.366927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.366939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.366950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.366971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.366983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.366994 | orchestrator | 2025-09-02 01:00:59.367005 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-02 01:00:59.367016 | orchestrator | Tuesday 02 September 2025 00:59:53 +0000 (0:00:03.507) 0:00:58.545 ***** 2025-09-02 01:00:59.367028 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:00:59.367039 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:00:59.367050 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:00:59.367061 | orchestrator | 2025-09-02 01:00:59.367072 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-02 01:00:59.367083 | orchestrator | Tuesday 02 September 2025 00:59:55 +0000 (0:00:02.602) 0:01:01.147 ***** 2025-09-02 01:00:59.367094 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-02 01:00:59.367105 | orchestrator | 2025-09-02 01:00:59.367115 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-02 01:00:59.367126 | orchestrator | Tuesday 02 September 2025 00:59:56 +0000 (0:00:01.130) 0:01:02.277 ***** 2025-09-02 01:00:59.367137 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:00:59.367148 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:00:59.367158 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:00:59.367168 | orchestrator | 2025-09-02 01:00:59.367182 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-02 01:00:59.367192 | orchestrator | Tuesday 02 September 2025 00:59:57 +0000 (0:00:00.672) 0:01:02.950 ***** 2025-09-02 01:00:59.367203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-02 01:00:59.367213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-02 01:00:59.367233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-02 01:00:59.367243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.367259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 
'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.367270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.367280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.367295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.367310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.367320 | orchestrator | 2025-09-02 01:00:59.367330 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-02 01:00:59.367340 | orchestrator | Tuesday 02 September 2025 01:00:08 +0000 (0:00:11.054) 0:01:14.004 ***** 2025-09-02 01:00:59.367350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-02 01:00:59.367366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-02 01:00:59.367377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-02 01:00:59.367387 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:00:59.367397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-02 01:00:59.367419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-02 01:00:59.367430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-02 01:00:59.367440 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:00:59.367450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-02 01:00:59.367466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-02 01:00:59.367476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-02 01:00:59.367491 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:00:59.367501 | orchestrator | 2025-09-02 01:00:59.367511 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-02 01:00:59.367521 | orchestrator | Tuesday 02 September 2025 01:00:09 +0000 (0:00:00.812) 0:01:14.817 ***** 2025-09-02 01:00:59.367535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-02 01:00:59.367546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-02 01:00:59.367557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-02 01:00:59.367572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.367601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.367611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.367626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.367637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.367647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:00:59.367657 | orchestrator | 2025-09-02 01:00:59.367667 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-02 01:00:59.367677 | orchestrator | Tuesday 02 September 2025 01:00:13 +0000 (0:00:04.087) 0:01:18.904 ***** 2025-09-02 01:00:59.367687 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:00:59.367696 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:00:59.367706 | 
orchestrator | skipping: [testbed-node-2] 2025-09-02 01:00:59.367716 | orchestrator | 2025-09-02 01:00:59.367725 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-02 01:00:59.367740 | orchestrator | Tuesday 02 September 2025 01:00:13 +0000 (0:00:00.367) 0:01:19.272 ***** 2025-09-02 01:00:59.367756 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:00:59.367765 | orchestrator | 2025-09-02 01:00:59.367775 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-09-02 01:00:59.367785 | orchestrator | Tuesday 02 September 2025 01:00:16 +0000 (0:00:02.361) 0:01:21.633 ***** 2025-09-02 01:00:59.367794 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:00:59.367804 | orchestrator | 2025-09-02 01:00:59.367813 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-09-02 01:00:59.367823 | orchestrator | Tuesday 02 September 2025 01:00:18 +0000 (0:00:02.282) 0:01:23.915 ***** 2025-09-02 01:00:59.367833 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:00:59.367842 | orchestrator | 2025-09-02 01:00:59.367852 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-02 01:00:59.367861 | orchestrator | Tuesday 02 September 2025 01:00:31 +0000 (0:00:12.616) 0:01:36.532 ***** 2025-09-02 01:00:59.367871 | orchestrator | 2025-09-02 01:00:59.367880 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-02 01:00:59.367890 | orchestrator | Tuesday 02 September 2025 01:00:31 +0000 (0:00:00.231) 0:01:36.764 ***** 2025-09-02 01:00:59.367900 | orchestrator | 2025-09-02 01:00:59.367909 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-02 01:00:59.367919 | orchestrator | Tuesday 02 September 2025 01:00:31 +0000 (0:00:00.124) 0:01:36.888 ***** 2025-09-02 01:00:59.367928 | orchestrator | 2025-09-02 01:00:59.367938 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-09-02 01:00:59.367947 | orchestrator | Tuesday 02 September 2025 01:00:31 +0000 (0:00:00.107) 0:01:36.996 ***** 2025-09-02 01:00:59.367957 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:00:59.367967 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:00:59.367976 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:00:59.367986 | orchestrator | 2025-09-02 01:00:59.367995 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-09-02 01:00:59.368005 | orchestrator | Tuesday 02 September 2025 01:00:39 +0000 (0:00:07.385) 0:01:44.382 ***** 2025-09-02 01:00:59.368014 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:00:59.368024 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:00:59.368033 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:00:59.368043 | orchestrator | 2025-09-02 01:00:59.368052 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-09-02 01:00:59.368062 | orchestrator | Tuesday 02 September 2025 01:00:50 +0000 (0:00:11.497) 0:01:55.879 ***** 2025-09-02 01:00:59.368072 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:00:59.368081 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:00:59.368091 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:00:59.368100 | orchestrator | 2025-09-02 01:00:59.368110 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-02 01:00:59.368119 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-02 01:00:59.368129 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-02 01:00:59.368143 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-02 01:00:59.368153 | orchestrator | 2025-09-02 01:00:59.368163 | orchestrator | 2025-09-02 01:00:59.368172 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 01:00:59.368182 | orchestrator | Tuesday 02 September 2025 01:00:57 +0000 (0:00:07.060) 0:02:02.940 ***** 2025-09-02 01:00:59.368192 | orchestrator | =============================================================================== 2025-09-02 01:00:59.368202 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.44s 2025-09-02 01:00:59.368211 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.62s 2025-09-02 01:00:59.368226 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.50s 2025-09-02 01:00:59.368236 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.05s 2025-09-02 01:00:59.368245 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.54s 2025-09-02 01:00:59.368255 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.38s 2025-09-02 01:00:59.368265 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.06s 2025-09-02 01:00:59.368274 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.65s 2025-09-02 01:00:59.368284 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.09s 2025-09-02 01:00:59.368293 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.74s 2025-09-02 01:00:59.368303 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.64s 2025-09-02 01:00:59.368313 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.51s 2025-09-02 01:00:59.368322 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.47s 2025-09-02 01:00:59.368332 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.07s 2025-09-02 01:00:59.368341 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.60s 2025-09-02 01:00:59.368351 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.43s 2025-09-02 01:00:59.368360 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.36s 2025-09-02 01:00:59.368370 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.31s 2025-09-02 01:00:59.368380 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.28s 2025-09-02 01:00:59.368394 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 2.17s 2025-09-02 01:00:59.368404 | orchestrator | 2025-09-02 01:00:59 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state STARTED 
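The repeated status lines before and after this point come from the deployment orchestrator polling its task queue: after triggering the next set of service plays it looks up the state of each task ID, prints it, and sleeps before the next round until every task reports SUCCESS. Below is a minimal, self-contained Python sketch of that wait pattern; `get_state` is a stand-in callable used only for illustration, not the actual OSISM client API.

```python
import time
from typing import Callable, Iterable

def wait_for_tasks(task_ids: Iterable[str],
                   get_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    """Poll every task until it leaves the STARTED state.

    Mirrors the log pattern: each round prints the current state of all
    still-pending tasks, then waits before the next check.
    """
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("SUCCESS", "FAILURE"):
                still_running.append(task_id)
        pending = still_running
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)

# Toy usage: every task reports STARTED twice, then SUCCESS.
if __name__ == "__main__":
    calls: dict[str, int] = {}
    def fake_state(task_id: str) -> str:
        calls[task_id] = calls.get(task_id, 0) + 1
        return "SUCCESS" if calls[task_id] > 2 else "STARTED"
    wait_for_tasks(["5801c833", "3432eb98"], fake_state)
```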
2025-09-02 01:00:59.368414 | orchestrator | 2025-09-02 01:00:59 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:00:59.368423 | orchestrator | 2025-09-02 01:00:59 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:00:59.368433 | orchestrator | 2025-09-02 01:00:59 | INFO  | Wait 1 second(s) until the next check (on every subsequent check from 01:01:02 through 01:02:03, roughly every 3 seconds, the tasks 5801c833-b161-4a0a-9a13-7aec99962417, 3432eb98-ab47-4383-8ad8-9e09a1c94766, 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e and, from 01:01:02 onward, 55506ef5-d006-4145-af88-607c8bcdd335 were reported in state STARTED, each round ending with "Wait 1 second(s) until the next check") 2025-09-02 01:02:06.341738 | orchestrator | 2025-09-02 
01:02:06 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:02:06.345396 | orchestrator | 2025-09-02 01:02:06 | INFO  | Task 5801c833-b161-4a0a-9a13-7aec99962417 is in state SUCCESS 2025-09-02 01:02:06.348209 | orchestrator | 2025-09-02 01:02:06.348249 | orchestrator | 2025-09-02 01:02:06.348263 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 01:02:06.348275 | orchestrator | 2025-09-02 01:02:06.348286 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 01:02:06.348297 | orchestrator | Tuesday 02 September 2025 00:58:55 +0000 (0:00:00.292) 0:00:00.292 ***** 2025-09-02 01:02:06.348309 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:02:06.348321 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:02:06.348331 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:02:06.348386 | orchestrator | 2025-09-02 01:02:06.348484 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 01:02:06.348496 | orchestrator | Tuesday 02 September 2025 00:58:55 +0000 (0:00:00.398) 0:00:00.690 ***** 2025-09-02 01:02:06.348508 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-09-02 01:02:06.348545 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-09-02 01:02:06.348557 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-09-02 01:02:06.348568 | orchestrator | 2025-09-02 01:02:06.348598 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-09-02 01:02:06.348609 | orchestrator | 2025-09-02 01:02:06.348620 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-02 01:02:06.348631 | orchestrator | Tuesday 02 September 2025 00:58:56 +0000 (0:00:00.657) 0:00:01.348 ***** 2025-09-02 01:02:06.348642 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:02:06.348657 | orchestrator | 2025-09-02 01:02:06.348670 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-09-02 01:02:06.348682 | orchestrator | Tuesday 02 September 2025 00:58:57 +0000 (0:00:00.718) 0:00:02.066 ***** 2025-09-02 01:02:06.348695 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-09-02 01:02:06.348708 | orchestrator | 2025-09-02 01:02:06.348721 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-09-02 01:02:06.348735 | orchestrator | Tuesday 02 September 2025 00:59:01 +0000 (0:00:03.948) 0:00:06.014 ***** 2025-09-02 01:02:06.348748 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-09-02 01:02:06.348843 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-09-02 01:02:06.348942 | orchestrator | 2025-09-02 01:02:06.348958 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-09-02 01:02:06.348973 | orchestrator | Tuesday 02 September 2025 00:59:07 +0000 (0:00:06.833) 0:00:12.848 ***** 2025-09-02 01:02:06.349050 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-02 01:02:06.349063 | orchestrator | 2025-09-02 01:02:06.349074 | orchestrator | TASK [service-ks-register : designate | Creating 
users] ************************ 2025-09-02 01:02:06.349085 | orchestrator | Tuesday 02 September 2025 00:59:10 +0000 (0:00:03.084) 0:00:15.932 ***** 2025-09-02 01:02:06.349097 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-02 01:02:06.349108 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-02 01:02:06.349143 | orchestrator | 2025-09-02 01:02:06.349155 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-09-02 01:02:06.349166 | orchestrator | Tuesday 02 September 2025 00:59:14 +0000 (0:00:03.787) 0:00:19.720 ***** 2025-09-02 01:02:06.349177 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-02 01:02:06.349188 | orchestrator | 2025-09-02 01:02:06.349199 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-02 01:02:06.349210 | orchestrator | Tuesday 02 September 2025 00:59:18 +0000 (0:00:03.355) 0:00:23.075 ***** 2025-09-02 01:02:06.349221 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-02 01:02:06.349231 | orchestrator | 2025-09-02 01:02:06.349242 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-02 01:02:06.349253 | orchestrator | Tuesday 02 September 2025 00:59:23 +0000 (0:00:04.867) 0:00:27.943 ***** 2025-09-02 01:02:06.349267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-02 01:02:06.349298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-02 01:02:06.349311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-02 01:02:06.349350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349602 | orchestrator | 2025-09-02 01:02:06.349615 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-02 01:02:06.349626 | orchestrator | Tuesday 02 September 2025 00:59:26 +0000 (0:00:03.848) 0:00:31.791 ***** 2025-09-02 01:02:06.349637 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:02:06.349648 | orchestrator | 2025-09-02 01:02:06.349659 | orchestrator | TASK [designate : Set designate policy file] *********************************** 
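The looped designate tasks in this play (Ensuring config directories exist above, the certificate-copy tasks below) all iterate one per-service map, visible in the item= dumps: each key names a service and each value carries the container name, image, bind mounts and a healthcheck definition (healthcheck_curl against the HTTP API port for designate-api, healthcheck_port against 5672 for the backend daemons). The following Python sketch walks such a map purely to illustrate the data shape shown in this log; it is not the kolla-ansible role itself, which drives the same dictionary through Ansible loops and templates.

```python
# Two entries of a kolla-style service map, mirroring the item= dumps above.
designate_services = {
    "designate-api": {
        "container_name": "designate_api",
        "enabled": True,
        "image": "registry.osism.tech/kolla/designate-api:2024.2",
        "volumes": ["/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro",
                    "kolla_logs:/var/log/kolla/"],
        "healthcheck": {"interval": "30", "retries": "3", "start_period": "5",
                        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"],
                        "timeout": "30"},
    },
    "designate-worker": {
        "container_name": "designate_worker",
        "enabled": True,
        "image": "registry.osism.tech/kolla/designate-worker:2024.2",
        "volumes": ["/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro",
                    "kolla_logs:/var/log/kolla/"],
        "healthcheck": {"interval": "30", "retries": "3", "start_period": "5",
                        "test": ["CMD-SHELL", "healthcheck_port designate-worker 5672"],
                        "timeout": "30"},
    },
}

def enabled_services(services: dict) -> list[tuple[str, dict]]:
    """Return the (name, definition) pairs a looped task would iterate,
    i.e. what each item= entry in the log corresponds to."""
    return [(name, svc) for name, svc in services.items() if svc.get("enabled")]

for name, svc in enabled_services(designate_services):
    # e.g. "Ensuring config directories exist" creates /etc/kolla/<service>/
    # on the target host for every enabled service.
    print(f"{name}: config dir /etc/kolla/{name}/, "
          f"healthcheck {svc['healthcheck']['test'][1]!r}")
```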
2025-09-02 01:02:06.349670 | orchestrator | Tuesday 02 September 2025 00:59:26 +0000 (0:00:00.119) 0:00:31.911 ***** 2025-09-02 01:02:06.349680 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:02:06.349691 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:02:06.349702 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:02:06.349713 | orchestrator | 2025-09-02 01:02:06.349724 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-02 01:02:06.349735 | orchestrator | Tuesday 02 September 2025 00:59:27 +0000 (0:00:00.233) 0:00:32.145 ***** 2025-09-02 01:02:06.349746 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:02:06.349757 | orchestrator | 2025-09-02 01:02:06.349767 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-02 01:02:06.349778 | orchestrator | Tuesday 02 September 2025 00:59:27 +0000 (0:00:00.554) 0:00:32.699 ***** 2025-09-02 01:02:06.349790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-02 01:02:06.349810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-02 01:02:06.349828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-02 01:02:06.349844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349914 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.349987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.350004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.350060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.350081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.350093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.350104 | orchestrator | 2025-09-02 01:02:06.350115 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-02 01:02:06.350126 | orchestrator | Tuesday 02 September 2025 00:59:33 +0000 (0:00:05.834) 0:00:38.534 ***** 2025-09-02 01:02:06.350138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-02 01:02:06.350149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-02 01:02:06.350178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.350191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.350236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.350249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.350260 | orchestrator | skipping: [testbed-node-1] 2025-09-02 
01:02:06.350272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-02 01:02:06.350290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-02 01:02:06.350794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.350866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.350880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.350905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.350917 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:02:06.350929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-02 01:02:06.350959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-02 01:02:06.350985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.350996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.351006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.351021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.351031 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:02:06.351042 | orchestrator | 2025-09-02 01:02:06.351052 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-02 01:02:06.351063 | orchestrator | Tuesday 02 September 2025 00:59:35 +0000 (0:00:01.626) 0:00:40.160 ***** 2025-09-02 01:02:06.351073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-02 01:02:06.351090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-02 01:02:06.351105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.351116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.351126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.351140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.351151 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:02:06.351161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-02 01:02:06.351176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-02 01:02:06.351192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.351203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.351213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.351226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.351237 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:02:06.351247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-02 01:02:06.351263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-02 01:02:06.351273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.351289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.351299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.351314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.351324 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:02:06.351334 | orchestrator | 2025-09-02 01:02:06.351345 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-02 01:02:06.351358 | orchestrator | Tuesday 02 
September 2025 00:59:37 +0000 (0:00:02.765) 0:00:42.925 ***** 2025-09-02 01:02:06.351370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-02 01:02:06.351389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-02 01:02:06.351408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-02 01:02:06.351421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351650 | orchestrator | 2025-09-02 01:02:06.351662 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-09-02 01:02:06.351673 | orchestrator | Tuesday 02 September 2025 00:59:44 +0000 (0:00:06.982) 0:00:49.907 ***** 2025-09-02 01:02:06.351694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-02 01:02:06.351713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-02 01:02:06.351723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-02 01:02:06.351739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.351912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
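[Editor's note] The (item=...) dictionaries that each of these "Copying over ..." tasks loops over are hard to read inline, so the sketch below re-renders one of them as YAML. It is an illustrative reconstruction of the logged loop item for designate-api on testbed-node-0, not the actual kolla-ansible variable file; the top-level key name designate_services is an assumption, and the two empty placeholder entries in the logged volumes list are omitted for readability. The other services (central, mdns, producer, worker, backend-bind9) differ only in image, config path, the extra named volume for bind9, and a healthcheck_port/healthcheck_listen test instead of healthcheck_curl.

  # Illustrative YAML rendering of the logged loop item (assumed variable name).
  designate_services:
    designate-api:
      container_name: designate_api
      group: designate-api
      enabled: true
      image: registry.osism.tech/kolla/designate-api:2024.2
      volumes:
        - "/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro"
        - "/etc/localtime:/etc/localtime:ro"
        - "/etc/timezone:/etc/timezone:ro"
        - "kolla_logs:/var/log/kolla/"
      dimensions: {}
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"]
        timeout: "30"
      haproxy:
        designate_api:
          enabled: "yes"
          mode: "http"
          external: false
          port: "9001"
          listen_port: "9001"
        designate_api_external:
          enabled: "yes"
          mode: "http"
          external: true
          external_fqdn: "api.testbed.osism.xyz"
          port: "9001"
          listen_port: "9001"

Each "changed"/"skipping" line in the tasks above and below corresponds to one such entry being templated (or skipped, e.g. when backend TLS is not enabled) on one of the three testbed nodes.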
2025-09-02 01:02:06.351922 | orchestrator | 2025-09-02 01:02:06.351932 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-09-02 01:02:06.351942 | orchestrator | Tuesday 02 September 2025 01:00:08 +0000 (0:00:23.447) 0:01:13.355 ***** 2025-09-02 01:02:06.351956 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-02 01:02:06.351966 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-02 01:02:06.351976 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-02 01:02:06.351986 | orchestrator | 2025-09-02 01:02:06.351995 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-09-02 01:02:06.352005 | orchestrator | Tuesday 02 September 2025 01:00:15 +0000 (0:00:06.790) 0:01:20.146 ***** 2025-09-02 01:02:06.352014 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-02 01:02:06.352024 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-02 01:02:06.352038 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-02 01:02:06.352048 | orchestrator | 2025-09-02 01:02:06.352057 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-09-02 01:02:06.352067 | orchestrator | Tuesday 02 September 2025 01:00:18 +0000 (0:00:02.806) 0:01:22.953 ***** 2025-09-02 01:02:06.352077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-02 01:02:06.352088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-02 01:02:06.352104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-02 01:02:06.352115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.352130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.352175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.352232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.352278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.352294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.352304 | orchestrator | 2025-09-02 01:02:06.352314 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-02 01:02:06.352323 | orchestrator | Tuesday 02 September 2025 01:00:21 +0000 (0:00:03.907) 0:01:26.861 ***** 2025-09-02 01:02:06.352338 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-02 01:02:06.352349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-02 01:02:06.352359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-02 01:02:06.352375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.352390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.352439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.352495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.352703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.352722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.352732 | orchestrator | 2025-09-02 01:02:06.352742 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-02 01:02:06.352752 | orchestrator | Tuesday 02 September 2025 01:00:24 +0000 (0:00:02.860) 0:01:29.721 ***** 2025-09-02 01:02:06.352762 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:02:06.352772 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:02:06.352782 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:02:06.352792 | orchestrator | 2025-09-02 01:02:06.352801 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-02 01:02:06.352811 | orchestrator | Tuesday 02 September 2025 01:00:25 +0000 (0:00:00.627) 0:01:30.349 ***** 2025-09-02 01:02:06.352827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-02 01:02:06.352837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-02 01:02:06.352848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352903 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:02:06.352918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-02 01:02:06.352928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-02 01:02:06.352938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.352980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2025-09-02 01:02:06.352990 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:02:06.353004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-02 01:02:06.353015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-02 01:02:06.353025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.353041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.353051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-02 
01:02:06.353066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-02 01:02:06.353076 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:02:06.353086 | orchestrator | 2025-09-02 01:02:06.353096 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-02 01:02:06.353106 | orchestrator | Tuesday 02 September 2025 01:00:27 +0000 (0:00:02.584) 0:01:32.933 ***** 2025-09-02 01:02:06.353120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-02 01:02:06.353130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-02 01:02:06.353146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-02 01:02:06.353156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.353171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.353182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-02 01:02:06.353196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.353206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.353221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.353232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.353246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.353257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.353267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.353281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.353292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.353310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.353321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.353336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-02 01:02:06.353349 | orchestrator | 2025-09-02 01:02:06.353361 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-02 01:02:06.353372 | orchestrator | Tuesday 02 September 2025 01:00:32 +0000 (0:00:04.946) 0:01:37.879 ***** 2025-09-02 01:02:06.353383 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:02:06.353395 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:02:06.353407 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:02:06.353420 | orchestrator | 2025-09-02 01:02:06.353431 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-09-02 01:02:06.353442 | orchestrator | Tuesday 02 September 2025 01:00:33 +0000 (0:00:00.828) 0:01:38.707 ***** 2025-09-02 01:02:06.353454 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-09-02 01:02:06.353465 | orchestrator | 2025-09-02 01:02:06.353476 | orchestrator | TASK [designate : Creating Designate 
databases user and setting permissions] *** 2025-09-02 01:02:06.353488 | orchestrator | Tuesday 02 September 2025 01:00:36 +0000 (0:00:02.350) 0:01:41.057 ***** 2025-09-02 01:02:06.353499 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-02 01:02:06.353511 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-09-02 01:02:06.353522 | orchestrator | 2025-09-02 01:02:06.353533 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-09-02 01:02:06.353545 | orchestrator | Tuesday 02 September 2025 01:00:39 +0000 (0:00:02.893) 0:01:43.951 ***** 2025-09-02 01:02:06.353556 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:02:06.353568 | orchestrator | 2025-09-02 01:02:06.353623 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-02 01:02:06.353637 | orchestrator | Tuesday 02 September 2025 01:00:55 +0000 (0:00:16.535) 0:02:00.486 ***** 2025-09-02 01:02:06.353648 | orchestrator | 2025-09-02 01:02:06.353660 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-02 01:02:06.353679 | orchestrator | Tuesday 02 September 2025 01:00:56 +0000 (0:00:00.810) 0:02:01.296 ***** 2025-09-02 01:02:06.353690 | orchestrator | 2025-09-02 01:02:06.353703 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-02 01:02:06.353720 | orchestrator | Tuesday 02 September 2025 01:00:56 +0000 (0:00:00.112) 0:02:01.409 ***** 2025-09-02 01:02:06.353730 | orchestrator | 2025-09-02 01:02:06.353740 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-09-02 01:02:06.353748 | orchestrator | Tuesday 02 September 2025 01:00:56 +0000 (0:00:00.080) 0:02:01.489 ***** 2025-09-02 01:02:06.353756 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:02:06.353764 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:02:06.353773 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:02:06.353781 | orchestrator | 2025-09-02 01:02:06.353789 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-09-02 01:02:06.353797 | orchestrator | Tuesday 02 September 2025 01:01:13 +0000 (0:00:16.535) 0:02:18.025 ***** 2025-09-02 01:02:06.353805 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:02:06.353813 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:02:06.353821 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:02:06.353829 | orchestrator | 2025-09-02 01:02:06.353837 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-09-02 01:02:06.353845 | orchestrator | Tuesday 02 September 2025 01:01:23 +0000 (0:00:10.606) 0:02:28.632 ***** 2025-09-02 01:02:06.353853 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:02:06.353861 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:02:06.353869 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:02:06.353877 | orchestrator | 2025-09-02 01:02:06.353885 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-09-02 01:02:06.353893 | orchestrator | Tuesday 02 September 2025 01:01:30 +0000 (0:00:06.912) 0:02:35.545 ***** 2025-09-02 01:02:06.353901 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:02:06.353909 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:02:06.353917 | orchestrator | changed: [testbed-node-1] 
2025-09-02 01:02:06.353925 | orchestrator | 2025-09-02 01:02:06.353933 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-09-02 01:02:06.353941 | orchestrator | Tuesday 02 September 2025 01:01:38 +0000 (0:00:07.610) 0:02:43.155 ***** 2025-09-02 01:02:06.353949 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:02:06.353957 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:02:06.353964 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:02:06.353972 | orchestrator | 2025-09-02 01:02:06.353980 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-09-02 01:02:06.353988 | orchestrator | Tuesday 02 September 2025 01:01:44 +0000 (0:00:06.709) 0:02:49.865 ***** 2025-09-02 01:02:06.353996 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:02:06.354004 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:02:06.354012 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:02:06.354041 | orchestrator | 2025-09-02 01:02:06.354050 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-09-02 01:02:06.354058 | orchestrator | Tuesday 02 September 2025 01:01:55 +0000 (0:00:10.864) 0:03:00.730 ***** 2025-09-02 01:02:06.354068 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:02:06.354076 | orchestrator | 2025-09-02 01:02:06.354084 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 01:02:06.354111 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-02 01:02:06.354121 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-02 01:02:06.354129 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-02 01:02:06.354143 | orchestrator | 2025-09-02 01:02:06.354151 | orchestrator | 2025-09-02 01:02:06.354165 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 01:02:06.354173 | orchestrator | Tuesday 02 September 2025 01:02:02 +0000 (0:00:07.133) 0:03:07.863 ***** 2025-09-02 01:02:06.354181 | orchestrator | =============================================================================== 2025-09-02 01:02:06.354189 | orchestrator | designate : Copying over designate.conf -------------------------------- 23.45s 2025-09-02 01:02:06.354197 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 16.54s 2025-09-02 01:02:06.354205 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.54s 2025-09-02 01:02:06.354213 | orchestrator | designate : Restart designate-worker container ------------------------- 10.86s 2025-09-02 01:02:06.354221 | orchestrator | designate : Restart designate-api container ---------------------------- 10.61s 2025-09-02 01:02:06.354229 | orchestrator | designate : Restart designate-producer container ------------------------ 7.61s 2025-09-02 01:02:06.354237 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.13s 2025-09-02 01:02:06.354245 | orchestrator | designate : Copying over config.json files for services ----------------- 6.98s 2025-09-02 01:02:06.354252 | orchestrator | designate : Restart designate-central container ------------------------- 6.91s 2025-09-02 01:02:06.354260 | orchestrator | service-ks-register : 
designate | Creating endpoints -------------------- 6.83s 2025-09-02 01:02:06.354268 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.79s 2025-09-02 01:02:06.354276 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.71s 2025-09-02 01:02:06.354284 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.83s 2025-09-02 01:02:06.354292 | orchestrator | designate : Check designate containers ---------------------------------- 4.95s 2025-09-02 01:02:06.354300 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.87s 2025-09-02 01:02:06.354308 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.95s 2025-09-02 01:02:06.354315 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.91s 2025-09-02 01:02:06.354327 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.85s 2025-09-02 01:02:06.354335 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.79s 2025-09-02 01:02:06.354343 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.36s 2025-09-02 01:02:06.354351 | orchestrator | 2025-09-02 01:02:06 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state STARTED 2025-09-02 01:02:06.354359 | orchestrator | 2025-09-02 01:02:06 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:02:06.354367 | orchestrator | 2025-09-02 01:02:06 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:02:06.354375 | orchestrator | 2025-09-02 01:02:06 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:02:09.404051 | orchestrator | 2025-09-02 01:02:09 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:02:09.404299 | orchestrator | 2025-09-02 01:02:09 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state STARTED 2025-09-02 01:02:09.404765 | orchestrator | 2025-09-02 01:02:09 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:02:09.405519 | orchestrator | 2025-09-02 01:02:09 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:02:09.405547 | orchestrator | 2025-09-02 01:02:09 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:02:12.454784 | orchestrator | 2025-09-02 01:02:12 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:02:12.455567 | orchestrator | 2025-09-02 01:02:12 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state STARTED 2025-09-02 01:02:12.457705 | orchestrator | 2025-09-02 01:02:12 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state STARTED 2025-09-02 01:02:12.459438 | orchestrator | 2025-09-02 01:02:12 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:02:12.459850 | orchestrator | 2025-09-02 01:02:12 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:02:15.510332 | orchestrator | 2025-09-02 01:02:15 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:02:15.510499 | orchestrator | 2025-09-02 01:02:15 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:02:15.512037 | orchestrator | 2025-09-02 01:02:15 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state STARTED 2025-09-02 
01:02:15.515055 | orchestrator | 2025-09-02 01:02:15 | INFO  | Task 3432eb98-ab47-4383-8ad8-9e09a1c94766 is in state SUCCESS 2025-09-02 01:02:15.517519 | orchestrator | 2025-09-02 01:02:15.517549 | orchestrator | 2025-09-02 01:02:15.517561 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 01:02:15.517618 | orchestrator | 2025-09-02 01:02:15.517631 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 01:02:15.517643 | orchestrator | Tuesday 02 September 2025 00:58:46 +0000 (0:00:00.281) 0:00:00.281 ***** 2025-09-02 01:02:15.517654 | orchestrator | ok: [testbed-manager] 2025-09-02 01:02:15.517666 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:02:15.517677 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:02:15.517688 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:02:15.517699 | orchestrator | ok: [testbed-node-3] 2025-09-02 01:02:15.517710 | orchestrator | ok: [testbed-node-4] 2025-09-02 01:02:15.517720 | orchestrator | ok: [testbed-node-5] 2025-09-02 01:02:15.517731 | orchestrator | 2025-09-02 01:02:15.517742 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 01:02:15.517753 | orchestrator | Tuesday 02 September 2025 00:58:47 +0000 (0:00:00.843) 0:00:01.125 ***** 2025-09-02 01:02:15.517765 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-09-02 01:02:15.517776 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-09-02 01:02:15.517786 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-09-02 01:02:15.517797 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-02 01:02:15.517808 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-02 01:02:15.517819 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-02 01:02:15.517829 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-02 01:02:15.517840 | orchestrator | 2025-09-02 01:02:15.517851 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-02 01:02:15.517861 | orchestrator | 2025-09-02 01:02:15.517872 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-02 01:02:15.517883 | orchestrator | Tuesday 02 September 2025 00:58:48 +0000 (0:00:00.923) 0:00:02.048 ***** 2025-09-02 01:02:15.517895 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 01:02:15.517907 | orchestrator | 2025-09-02 01:02:15.517918 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-09-02 01:02:15.517929 | orchestrator | Tuesday 02 September 2025 00:58:51 +0000 (0:00:02.889) 0:00:04.938 ***** 2025-09-02 01:02:15.517962 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-02 01:02:15.517999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.518011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.518072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.518099 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.518111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.518123 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.518140 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.518160 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.518171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.518183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.518197 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.518219 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.518232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.518243 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.518260 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.518278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.518290 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.518302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.518318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.518330 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.518342 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-02 01:02:15.518367 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.518379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.518390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.518402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.518413 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.518443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.518455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.518467 | orchestrator | 2025-09-02 01:02:15.518478 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-02 01:02:15.518490 | orchestrator | Tuesday 02 September 2025 00:58:55 +0000 (0:00:04.162) 0:00:09.101 ***** 2025-09-02 01:02:15.518501 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 01:02:15.518520 | orchestrator | 2025-09-02 01:02:15.518531 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-02 01:02:15.518542 | orchestrator | Tuesday 02 September 2025 00:58:57 +0000 (0:00:01.905) 0:00:11.006 ***** 2025-09-02 01:02:15.518558 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 
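The (item={'key': ..., 'value': ...}) entries printed by these tasks all follow the same kolla-ansible service-definition shape: a map keyed by service name, where each value carries container_name, group, enabled, image, volumes, dimensions and, for load-balanced services, an haproxy block. A minimal YAML sketch of one such entry, with values copied from the log output above (the variable name prometheus_services is assumed here for illustration):

    prometheus_services:
      prometheus-server:
        container_name: prometheus_server
        group: prometheus
        enabled: true
        image: registry.osism.tech/kolla/prometheus-v2-server:2024.2
        volumes:
          - "/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro"
          - "/etc/localtime:/etc/localtime:ro"
          - "/etc/timezone:/etc/timezone:ro"
          - "prometheus_v2:/var/lib/prometheus"
          - "kolla_logs:/var/log/kolla/"
        dimensions: {}
        haproxy:
          prometheus_server:
            enabled: true
            mode: http
            external: false
            port: "9091"
            active_passive: true
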
2025-09-02 01:02:15.518571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.518638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.518650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.518670 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.518682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.518693 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.518712 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.518729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.518742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.518753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.518765 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.518783 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.518795 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
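The per-host results in this task come from iterating that service map. An illustrative Ansible sketch of the pattern (not the actual kolla-ansible task; dict2items yields the same item.key/item.value pairs seen in the log, and the when clause is why each host only processes services whose group it belongs to):

    # Illustrative sketch, assuming a prometheus_services map as above.
    - name: Ensuring config directories exist (illustrative sketch)
      ansible.builtin.file:
        path: "/etc/kolla/{{ item.key }}"
        state: directory
        mode: "0770"
      become: true
      loop: "{{ prometheus_services | dict2items }}"
      when:
        - item.value.enabled | bool
        - inventory_hostname in groups[item.value.group]

This matches the distribution visible in the log: only testbed-manager handles prometheus-server, prometheus-alertmanager and prometheus-blackbox-exporter, testbed-node-0/1/2 additionally handle the mysqld, memcached and elasticsearch exporters, and testbed-node-3/4/5 pick up prometheus-libvirt-exporter.
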
2025-09-02 01:02:15.518812 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.518824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.518841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.518852 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.518864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.518875 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.518893 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-02 01:02:15.518918 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.518930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.518946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.518958 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.518969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 
01:02:15.518981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.519000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.519018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.519029 | orchestrator | 2025-09-02 01:02:15.519041 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-02 01:02:15.519052 | orchestrator | Tuesday 02 September 2025 00:59:03 +0000 (0:00:06.376) 0:00:17.383 ***** 2025-09-02 01:02:15.519064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 01:02:15.519081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519104 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 01:02:15.519116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519133 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-02 01:02:15.519151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 01:02:15.519162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519174 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 01:02:15.519190 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 01:02:15.519213 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 01:02:15.519225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519243 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-02 01:02:15.519264 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519275 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:02:15.519287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 01:02:15.519298 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:02:15.519309 | orchestrator | skipping: [testbed-manager] 2025-09-02 01:02:15.519325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 01:02:15.519360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519378 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:02:15.519406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 01:02:15.519418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 01:02:15.519429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-02 01:02:15.519441 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:02:15.519457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 01:02:15.519469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 01:02:15.519480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-02 01:02:15.519492 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:02:15.519503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 01:02:15.519521 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 01:02:15.519540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-02 01:02:15.519552 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:02:15.519563 | orchestrator | 2025-09-02 01:02:15.519598 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-02 01:02:15.519610 | orchestrator | Tuesday 02 September 2025 00:59:05 +0000 (0:00:01.636) 0:00:19.019 ***** 2025-09-02 01:02:15.519622 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-02 01:02:15.519639 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 01:02:15.519650 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 01:02:15.519662 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-02 01:02:15.519680 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 01:02:15.519710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 01:02:15.519750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 01:02:15.519778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519789 | orchestrator | skipping: [testbed-manager] 2025-09-02 01:02:15.519801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 01:02:15.519829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519841 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:02:15.519852 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:02:15.519863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 01:02:15.519882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 01:02:15.519925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-02 01:02:15.519936 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:02:15.519953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 01:02:15.519965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 01:02:15.519976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-02 01:02:15.519987 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:02:15.519999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 01:02:15.520015 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-02 01:02:15.520033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-02 01:02:15.520044 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:02:15.520056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-02 
01:02:15.520067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-02 01:02:15.520085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-02 01:02:15.520097 | orchestrator | skipping: [testbed-node-5]
2025-09-02 01:02:15.520108 | orchestrator |
2025-09-02 01:02:15.520119 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-09-02 01:02:15.520130 | orchestrator | Tuesday 02 September 2025 00:59:07 +0000 (0:00:02.034) 0:00:21.053 *****
2025-09-02 01:02:15.520141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-02 01:02:15.520153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-02 01:02:15.520169 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
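The config.json files copied by the task above are the kolla bootstrap configs that tell each container which files to move into place at startup and which command to run. A minimal sketch of such a file for the prometheus_server container is shown below (rendered here in YAML form); the binary path, flags and file names are illustrative assumptions, only the overall command/config_files layout reflects the usual kolla format:

    # Illustrative sketch only, not the rendered config.json from this run
    command: /opt/prometheus/prometheus --config.file=/etc/prometheus/prometheus.yml --storage.tsdb.path=/var/lib/prometheus   # path and flags assumed
    config_files:
      - source: /var/lib/kolla/config_files/prometheus.yml   # matches the config_files volume mounted above
        dest: /etc/prometheus/prometheus.yml
        owner: prometheus
        perm: "0600"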
2025-09-02 01:02:15.520187 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-02 01:02:15.520199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-02 01:02:15.520210 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-02 01:02:15.520227 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-02 01:02:15.520238 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-02 01:02:15.520250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-02 01:02:15.520261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.520284 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.520296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.520308 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.520319 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.520336 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.520348 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.520359 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.520371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.520395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.520406 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.520418 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.520436 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-02 
01:02:15.520448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.520460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.520471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.520494 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.520506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.520517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.520528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-02 01:02:15.520540 | orchestrator |
2025-09-02 01:02:15.520551 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-09-02 01:02:15.520562 | orchestrator | Tuesday 02 September 2025 00:59:12 +0000 (0:00:05.573) 0:00:26.626 *****
2025-09-02 01:02:15.520573 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-02 01:02:15.520600 | orchestrator |
2025-09-02 01:02:15.520612 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-09-02 01:02:15.520628 | orchestrator | Tuesday 02 September 2025 00:59:13 +0000 (0:00:01.023) 0:00:27.650 *****
2025-09-02 01:02:15.520640 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1851811, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.602497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-02 01:02:15.520652 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1851811, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.602497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-02 01:02:15.520671 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1851811, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.602497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-02 01:02:15.520687 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1851811, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.602497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
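The *.rules files found and copied by the two tasks above (fluentd-aggregator.rules, prometheus.rules, ceph.rules, and so on) are ordinary Prometheus alerting rule files picked up from the operations configuration. A minimal sketch of the shape of such a file follows; the group name, alert name, expression and threshold are illustrative assumptions, not contents of the files copied in this run:

    groups:
      - name: node.rules               # illustrative group name
        rules:
          - alert: NodeExporterDown    # hypothetical alert, not from this deployment
            expr: up{job="node"} == 0
            for: 5m
            labels:
              severity: critical
            annotations:
              summary: "node exporter target {{ $labels.instance }} is unreachable"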
2025-09-02 01:02:15.520699 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1851811, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.602497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-02 01:02:15.520710 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1851811, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.602497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-02 01:02:15.520727 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1851821, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6076195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-02 01:02:15.520739 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1851809, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6014972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-02 01:02:15.520756 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1851821, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6076195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-02 01:02:15.520768 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1851821, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6076195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False,
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.520787 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1851811, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.602497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.520799 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1851821, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6076195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.520810 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1851821, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6076195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.520827 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1851821, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6076195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.520839 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1851817, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.605749, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.520856 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1851821, 'dev': 116, 'nlink': 1, 'atime': 
1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6076195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 01:02:15.520868 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1851809, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6014972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.520883 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1851809, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6014972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.520895 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1851809, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6014972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.520906 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1851807, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6000931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.520923 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1851809, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6014972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.520934 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1851817, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.605749, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.520952 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1851812, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6032283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.520963 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1851809, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6014972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.520979 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1851817, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.605749, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.520990 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1851817, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.605749, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521002 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1851807, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6000931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521013 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1851816, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6044972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521030 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1851817, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.605749, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521048 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1851812, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6032283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521060 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1851807, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6000931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521075 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1851817, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.605749, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521087 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1851807, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6000931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521098 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1851816, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6044972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521110 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1851812, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6032283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521133 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1851807, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6000931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521145 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1851813, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.60357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521157 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1851816, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6044972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521173 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1851812, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6032283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521185 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1851807, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6000931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521197 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1851813, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.60357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521208 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1851810, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6014972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521231 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1851813, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.60357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521243 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1851816, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6044972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521254 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1851812, 'dev': 116, 'nlink': 1, 
'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6032283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521270 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1851810, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6014972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521282 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1851809, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6014972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 01:02:15.521294 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1851813, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.60357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521305 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1851810, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6014972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521635 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1851812, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6032283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521653 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1851816, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6044972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521664 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851820, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6072547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521682 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851820, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6072547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521694 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851820, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6072547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521705 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1851810, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6014972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521717 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851805, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5992599, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521748 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1851813, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.60357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521760 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1851816, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6044972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521771 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1851810, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6014972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521787 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851805, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5992599, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521798 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851805, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5992599, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521810 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851820, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6072547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521831 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1851827, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.60962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521848 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1851813, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.60357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521860 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1851827, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.60962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521871 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1851817, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.605749, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 01:02:15.521887 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1851827, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.60962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521899 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851820, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6072547, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521910 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1851810, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6014972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521928 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1851819, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.606524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521945 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851805, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5992599, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521957 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1851819, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.606524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521968 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1851819, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.606524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521984 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851820, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6072547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.521996 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851805, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5992599, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522007 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851808, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6003594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522057 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1851827, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.60962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522080 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851808, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6003594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522091 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851808, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6003594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522103 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1851807, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6000931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 01:02:15.522119 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851805, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5992599, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522130 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1851827, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.60962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522148 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1851806, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.599612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522159 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1851806, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.599612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522177 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1851819, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.606524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2025-09-02 01:02:15.522189 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1851806, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.599612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522200 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1851827, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.60962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522216 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1851819, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.606524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522228 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1851815, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6044972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522245 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851808, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6003594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522259 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1851815, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6044972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522278 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851808, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6003594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522292 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1851815, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6044972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522306 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1851812, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6032283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 01:02:15.522319 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1851814, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6040454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522331 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1851819, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.606524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522350 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1851814, 'dev': 116, 'nlink': 1, 'atime': 
1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6040454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522365 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1851826, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6088612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522377 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:02:15.522435 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1851806, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.599612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522450 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1851806, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.599612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522463 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851808, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6003594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522480 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1851826, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6088612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522504 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:02:15.522519 | orchestrator | skipping: [testbed-node-4] => 
(item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1851814, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6040454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522533 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1851815, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6044972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522546 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1851815, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6044972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522564 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1851806, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.599612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522605 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1851814, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6040454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522618 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1851826, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6088612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522629 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:02:15.522646 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1851814, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6040454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522666 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1851815, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6044972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522678 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1851816, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6044972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 01:02:15.522689 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1851826, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6088612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522701 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:02:15.522717 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1851826, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6088612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522729 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:02:15.522740 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1851814, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6040454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522751 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1851826, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6088612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-02 01:02:15.522773 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:02:15.522789 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1851813, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.60357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 01:02:15.522801 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1851810, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6014972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 01:02:15.522813 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851820, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6072547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 01:02:15.522824 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851805, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5992599, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 01:02:15.522840 | 
orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1851827, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.60962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 01:02:15.522852 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1851819, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.606524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 01:02:15.522863 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1851808, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6003594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 01:02:15.522886 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1851806, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.599612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 01:02:15.522898 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1851815, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6044972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 01:02:15.522909 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1851814, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6040454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 01:02:15.522920 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1851826, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.6088612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-02 01:02:15.522931 | orchestrator | 2025-09-02 01:02:15.522942 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-09-02 01:02:15.522953 | orchestrator | Tuesday 02 September 2025 00:59:42 +0000 (0:00:28.140) 0:00:55.790 ***** 2025-09-02 01:02:15.522965 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-02 01:02:15.522975 | orchestrator | 2025-09-02 01:02:15.522992 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-09-02 01:02:15.523003 | orchestrator | Tuesday 02 September 2025 00:59:43 +0000 (0:00:01.057) 0:00:56.848 ***** 2025-09-02 01:02:15.523014 | orchestrator | [WARNING]: Skipped 2025-09-02 01:02:15.523026 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-02 01:02:15.523037 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-09-02 01:02:15.523048 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-02 01:02:15.523059 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-09-02 01:02:15.523070 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-02 01:02:15.523081 | orchestrator | [WARNING]: Skipped 2025-09-02 01:02:15.523091 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-02 01:02:15.523102 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-09-02 01:02:15.523113 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-02 01:02:15.523130 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-09-02 01:02:15.523141 | orchestrator | [WARNING]: Skipped 2025-09-02 01:02:15.523152 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-02 01:02:15.523162 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-09-02 01:02:15.523173 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-02 01:02:15.523184 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-09-02 01:02:15.523195 | orchestrator | [WARNING]: Skipped 2025-09-02 01:02:15.523206 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-02 01:02:15.523216 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-09-02 01:02:15.523227 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-02 01:02:15.523238 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-09-02 01:02:15.523249 | orchestrator | [WARNING]: Skipped 2025-09-02 01:02:15.523259 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-02 01:02:15.523270 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-09-02 01:02:15.523281 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-02 01:02:15.523291 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-09-02 01:02:15.523302 | orchestrator | [WARNING]: Skipped 2025-09-02 01:02:15.523317 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-02 01:02:15.523329 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-09-02 01:02:15.523339 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-02 01:02:15.523350 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-09-02 01:02:15.523361 | orchestrator | [WARNING]: Skipped 2025-09-02 01:02:15.523372 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-02 01:02:15.523382 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-09-02 01:02:15.523393 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-02 01:02:15.523403 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-09-02 01:02:15.523414 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-02 01:02:15.523425 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-02 01:02:15.523436 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-02 01:02:15.523446 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-02 01:02:15.523457 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-02 01:02:15.523467 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-02 01:02:15.523478 | orchestrator | 2025-09-02 01:02:15.523488 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-09-02 01:02:15.523499 | orchestrator | Tuesday 02 September 2025 00:59:45 +0000 (0:00:02.386) 0:00:59.234 ***** 2025-09-02 01:02:15.523510 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-02 01:02:15.523521 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:02:15.523532 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-02 01:02:15.523543 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:02:15.523554 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-02 01:02:15.523564 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:02:15.523592 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-02 01:02:15.523603 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:02:15.523614 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-02 01:02:15.523631 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:02:15.523642 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-02 01:02:15.523653 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:02:15.523664 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-09-02 01:02:15.523674 | orchestrator | 2025-09-02 
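The three prometheus tasks above first look for a shared override directory and then for per-host prometheus.yml.d directories (the [WARNING] lines only record that no such per-host directories exist in this testbed configuration), after which prometheus.yml.j2 is rendered on the manager, presumably with any override fragments layered on top of the rendered base. A minimal Python sketch of that layering idea follows, assuming a simple recursive merge; the function, file names, keys, and values are illustrative only and are not kolla-ansible's actual merge implementation.

from copy import deepcopy

def deep_merge(base: dict, override: dict) -> dict:
    """Return base with override applied recursively (override wins on scalar conflicts)."""
    merged = deepcopy(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Rendered base config (what a prometheus.yml.j2 template might produce), trimmed down.
base_config = {
    "global": {"scrape_interval": "60s", "evaluation_interval": "60s"},
    "rule_files": ["/etc/prometheus/*.rules"],
}

# Hypothetical fragment from an overlay such as .../overlays/prometheus/prometheus.yml.d/scrape.yml.
override_fragment = {
    "global": {"scrape_interval": "30s"},
    "scrape_configs": [{"job_name": "blackbox", "metrics_path": "/probe"}],
}

print(deep_merge(base_config, override_fragment))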
01:02:15.523685 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-09-02 01:02:15.523696 | orchestrator | Tuesday 02 September 2025 01:00:13 +0000 (0:00:28.238) 0:01:27.473 ***** 2025-09-02 01:02:15.523707 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-02 01:02:15.523723 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:02:15.523734 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-02 01:02:15.523745 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-02 01:02:15.523755 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:02:15.523766 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:02:15.523777 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-02 01:02:15.523787 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:02:15.523798 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-02 01:02:15.523809 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:02:15.523819 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-02 01:02:15.523830 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:02:15.523841 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-09-02 01:02:15.523851 | orchestrator | 2025-09-02 01:02:15.523862 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-09-02 01:02:15.523873 | orchestrator | Tuesday 02 September 2025 01:00:18 +0000 (0:00:04.233) 0:01:31.706 ***** 2025-09-02 01:02:15.523884 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-02 01:02:15.523895 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-02 01:02:15.523906 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-02 01:02:15.523917 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:02:15.523928 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:02:15.523938 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:02:15.523949 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-02 01:02:15.523960 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:02:15.523976 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-09-02 01:02:15.523987 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-02 01:02:15.523998 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:02:15.524009 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-02 01:02:15.524020 | orchestrator | skipping: 
[testbed-node-5] 2025-09-02 01:02:15.524030 | orchestrator | 2025-09-02 01:02:15.524041 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-09-02 01:02:15.524052 | orchestrator | Tuesday 02 September 2025 01:00:20 +0000 (0:00:02.651) 0:01:34.358 ***** 2025-09-02 01:02:15.524070 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-02 01:02:15.524081 | orchestrator | 2025-09-02 01:02:15.524092 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-09-02 01:02:15.524102 | orchestrator | Tuesday 02 September 2025 01:00:22 +0000 (0:00:01.319) 0:01:35.678 ***** 2025-09-02 01:02:15.524113 | orchestrator | skipping: [testbed-manager] 2025-09-02 01:02:15.524124 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:02:15.524135 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:02:15.524145 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:02:15.524156 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:02:15.524167 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:02:15.524177 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:02:15.524188 | orchestrator | 2025-09-02 01:02:15.524199 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-09-02 01:02:15.524209 | orchestrator | Tuesday 02 September 2025 01:00:22 +0000 (0:00:00.931) 0:01:36.609 ***** 2025-09-02 01:02:15.524220 | orchestrator | skipping: [testbed-manager] 2025-09-02 01:02:15.524231 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:02:15.524241 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:02:15.524252 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:02:15.524263 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:02:15.524273 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:02:15.524284 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:02:15.524295 | orchestrator | 2025-09-02 01:02:15.524305 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-09-02 01:02:15.524316 | orchestrator | Tuesday 02 September 2025 01:00:25 +0000 (0:00:02.757) 0:01:39.367 ***** 2025-09-02 01:02:15.524327 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-02 01:02:15.524338 | orchestrator | skipping: [testbed-manager] 2025-09-02 01:02:15.524348 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-02 01:02:15.524359 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-02 01:02:15.524370 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:02:15.524380 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:02:15.524391 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-02 01:02:15.524402 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:02:15.524412 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-02 01:02:15.524429 | orchestrator | 2025-09-02 01:02:15 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:02:15.524441 | orchestrator | 2025-09-02 01:02:15 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:02:15.524451 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:02:15.524462 | orchestrator | 
skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-02 01:02:15.524473 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:02:15.524483 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-02 01:02:15.524493 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:02:15.524504 | orchestrator | 2025-09-02 01:02:15.524515 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-09-02 01:02:15.524525 | orchestrator | Tuesday 02 September 2025 01:00:28 +0000 (0:00:02.757) 0:01:42.125 ***** 2025-09-02 01:02:15.524536 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-02 01:02:15.524547 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-02 01:02:15.524558 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:02:15.524568 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:02:15.524636 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-09-02 01:02:15.524649 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-02 01:02:15.524660 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:02:15.524671 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-02 01:02:15.524681 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:02:15.524691 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-02 01:02:15.524700 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:02:15.524710 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-02 01:02:15.524719 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:02:15.524729 | orchestrator | 2025-09-02 01:02:15.524743 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-09-02 01:02:15.524753 | orchestrator | Tuesday 02 September 2025 01:00:31 +0000 (0:00:02.548) 0:01:44.673 ***** 2025-09-02 01:02:15.524763 | orchestrator | [WARNING]: Skipped 2025-09-02 01:02:15.524772 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-09-02 01:02:15.524782 | orchestrator | due to this access issue: 2025-09-02 01:02:15.524791 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-09-02 01:02:15.524801 | orchestrator | not a directory 2025-09-02 01:02:15.524811 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-02 01:02:15.524820 | orchestrator | 2025-09-02 01:02:15.524830 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-09-02 01:02:15.524839 | orchestrator | Tuesday 02 September 2025 01:00:32 +0000 (0:00:01.776) 0:01:46.449 ***** 2025-09-02 01:02:15.524849 | orchestrator | skipping: [testbed-manager] 2025-09-02 01:02:15.524858 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:02:15.524868 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:02:15.524877 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:02:15.524887 | 
orchestrator | skipping: [testbed-node-3] 2025-09-02 01:02:15.524896 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:02:15.524906 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:02:15.524915 | orchestrator | 2025-09-02 01:02:15.524925 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-02 01:02:15.524934 | orchestrator | Tuesday 02 September 2025 01:00:34 +0000 (0:00:01.775) 0:01:48.224 ***** 2025-09-02 01:02:15.524944 | orchestrator | skipping: [testbed-manager] 2025-09-02 01:02:15.524953 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:02:15.524963 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:02:15.524972 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:02:15.524982 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:02:15.524991 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:02:15.525000 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:02:15.525010 | orchestrator | 2025-09-02 01:02:15.525019 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-09-02 01:02:15.525029 | orchestrator | Tuesday 02 September 2025 01:00:35 +0000 (0:00:00.615) 0:01:48.840 ***** 2025-09-02 01:02:15.525039 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-02 01:02:15.525062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.525074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.525084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.525098 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.525109 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.525119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.525129 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.525139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.525160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.525172 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.525182 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-02 01:02:15.525197 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.525207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.525217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.525228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.525247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.525264 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.525275 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.525289 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-02 01:02:15.525301 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.525311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.525321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.525337 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.525352 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.525362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-02 01:02:15.525372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.525387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.525397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-02 01:02:15.525407 | orchestrator | 2025-09-02 01:02:15.525417 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-09-02 01:02:15.525426 | orchestrator | Tuesday 02 September 2025 01:00:40 +0000 (0:00:05.334) 0:01:54.175 ***** 2025-09-02 01:02:15.525436 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-02 01:02:15.525445 | orchestrator | skipping: [testbed-manager] 2025-09-02 01:02:15.525455 | orchestrator | 2025-09-02 01:02:15.525464 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-02 01:02:15.525480 | orchestrator | Tuesday 02 September 2025 01:00:42 +0000 (0:00:02.180) 0:01:56.355 ***** 2025-09-02 01:02:15.525490 | orchestrator | 2025-09-02 01:02:15.525499 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-02 01:02:15.525509 | orchestrator | Tuesday 02 September 2025 01:00:42 +0000 (0:00:00.108) 0:01:56.463 ***** 2025-09-02 01:02:15.525518 | orchestrator | 2025-09-02 01:02:15.525527 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-02 01:02:15.525537 | orchestrator | Tuesday 02 September 2025 01:00:42 +0000 (0:00:00.060) 0:01:56.524 ***** 2025-09-02 01:02:15.525546 | orchestrator | 2025-09-02 01:02:15.525556 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-02 01:02:15.525565 | orchestrator | Tuesday 02 September 2025 01:00:42 +0000 (0:00:00.126) 0:01:56.650 ***** 2025-09-02 01:02:15.525589 | orchestrator | 2025-09-02 01:02:15.525599 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-02 01:02:15.525609 | orchestrator | Tuesday 02 September 2025 01:00:43 +0000 (0:00:00.376) 0:01:57.027 ***** 2025-09-02 01:02:15.525619 | orchestrator | 2025-09-02 01:02:15.525628 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-02 01:02:15.525638 | orchestrator | Tuesday 02 September 2025 01:00:43 +0000 (0:00:00.215) 0:01:57.243 ***** 2025-09-02 01:02:15.525647 | orchestrator | 2025-09-02 01:02:15.525657 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-02 01:02:15.525666 | orchestrator | Tuesday 02 September 2025 01:00:43 +0000 (0:00:00.162) 0:01:57.405 ***** 2025-09-02 01:02:15.525675 | orchestrator | 2025-09-02 01:02:15.525685 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-09-02 01:02:15.525700 | orchestrator | Tuesday 02 September 2025 01:00:43 +0000 (0:00:00.107) 0:01:57.512 ***** 2025-09-02 01:02:15.525710 | orchestrator | changed: [testbed-manager] 2025-09-02 01:02:15.525720 | orchestrator | 2025-09-02 01:02:15.525729 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-09-02 01:02:15.525739 | orchestrator | Tuesday 02 September 2025 01:00:58 +0000 (0:00:14.526) 0:02:12.039 ***** 2025-09-02 01:02:15.525749 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:02:15.525758 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:02:15.525768 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:02:15.525777 | 
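The "Check prometheus containers" task above loops over a per-service dictionary (container name, image, volumes, optional pid_mode and haproxy settings) and queues a restart handler for every container it changed; the "Flush handlers" entries and the "RUNNING HANDLER" blocks are where those restarts actually run. Below is a minimal sketch of that dict-driven check-and-restart pattern. kolla-ansible uses its own container module, so community.docker.docker_container and the trimmed service map here are stand-ins for illustration only.

---
# Sketch of a dict-driven container check that notifies a restart handler.
# The service map is trimmed, and community.docker.docker_container replaces
# kolla-ansible's own container module; both are assumptions for illustration.
- hosts: prometheus
  gather_facts: false
  vars:
    prometheus_services:
      prometheus-node-exporter:
        container_name: prometheus_node_exporter
        enabled: true
        image: registry.osism.tech/kolla/prometheus-node-exporter:2024.2
        pid_mode: host
        volumes:
          - "/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro"
          - "/:/host:ro,rslave"
  tasks:
    - name: Check prometheus containers
      community.docker.docker_container:
        name: "{{ item.value.container_name }}"
        image: "{{ item.value.image }}"
        pid_mode: "{{ item.value.pid_mode | default(omit) }}"
        volumes: "{{ item.value.volumes }}"
        state: started
      loop: "{{ prometheus_services | dict2items }}"
      when: item.value.enabled | bool
      notify: Restart prometheus-node-exporter container
      # Any change queues the handler; it only runs once handlers are flushed,
      # matching the "Flush handlers" / "RUNNING HANDLER" ordering in the log.

  handlers:
    - name: Restart prometheus-node-exporter container
      community.docker.docker_container:
        name: prometheus_node_exporter
        image: registry.osism.tech/kolla/prometheus-node-exporter:2024.2
        state: started
        restart: true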
orchestrator | changed: [testbed-node-4] 2025-09-02 01:02:15.525787 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:02:15.525796 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:02:15.525805 | orchestrator | changed: [testbed-manager] 2025-09-02 01:02:15.525815 | orchestrator | 2025-09-02 01:02:15.525824 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-09-02 01:02:15.525834 | orchestrator | Tuesday 02 September 2025 01:01:14 +0000 (0:00:16.477) 0:02:28.516 ***** 2025-09-02 01:02:15.525843 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:02:15.525853 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:02:15.525862 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:02:15.525872 | orchestrator | 2025-09-02 01:02:15.525881 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-09-02 01:02:15.525891 | orchestrator | Tuesday 02 September 2025 01:01:27 +0000 (0:00:12.389) 0:02:40.905 ***** 2025-09-02 01:02:15.525900 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:02:15.525909 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:02:15.525919 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:02:15.525928 | orchestrator | 2025-09-02 01:02:15.525938 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-09-02 01:02:15.525947 | orchestrator | Tuesday 02 September 2025 01:01:33 +0000 (0:00:05.904) 0:02:46.810 ***** 2025-09-02 01:02:15.525957 | orchestrator | changed: [testbed-manager] 2025-09-02 01:02:15.525966 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:02:15.525976 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:02:15.525985 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:02:15.526000 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:02:15.526009 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:02:15.526078 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:02:15.526091 | orchestrator | 2025-09-02 01:02:15.526101 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-09-02 01:02:15.526111 | orchestrator | Tuesday 02 September 2025 01:01:46 +0000 (0:00:13.804) 0:03:00.614 ***** 2025-09-02 01:02:15.526120 | orchestrator | changed: [testbed-manager] 2025-09-02 01:02:15.526130 | orchestrator | 2025-09-02 01:02:15.526139 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-09-02 01:02:15.526149 | orchestrator | Tuesday 02 September 2025 01:01:56 +0000 (0:00:09.274) 0:03:09.888 ***** 2025-09-02 01:02:15.526164 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:02:15.526174 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:02:15.526184 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:02:15.526194 | orchestrator | 2025-09-02 01:02:15.526203 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-09-02 01:02:15.526213 | orchestrator | Tuesday 02 September 2025 01:02:01 +0000 (0:00:05.186) 0:03:15.074 ***** 2025-09-02 01:02:15.526223 | orchestrator | changed: [testbed-manager] 2025-09-02 01:02:15.526232 | orchestrator | 2025-09-02 01:02:15.526241 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-09-02 01:02:15.526251 | orchestrator | Tuesday 02 September 2025 01:02:07 +0000 (0:00:06.291) 0:03:21.366 ***** 2025-09-02 01:02:15.526261 | 
orchestrator | changed: [testbed-node-3] 2025-09-02 01:02:15.526270 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:02:15.526280 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:02:15.526289 | orchestrator | 2025-09-02 01:02:15.526299 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 01:02:15.526309 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-02 01:02:15.526319 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-02 01:02:15.526329 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-02 01:02:15.526339 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-02 01:02:15.526348 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-02 01:02:15.526358 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-02 01:02:15.526368 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-02 01:02:15.526377 | orchestrator | 2025-09-02 01:02:15.526387 | orchestrator | 2025-09-02 01:02:15.526396 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 01:02:15.526406 | orchestrator | Tuesday 02 September 2025 01:02:13 +0000 (0:00:05.677) 0:03:27.044 ***** 2025-09-02 01:02:15.526416 | orchestrator | =============================================================================== 2025-09-02 01:02:15.526425 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 28.24s 2025-09-02 01:02:15.526435 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 28.14s 2025-09-02 01:02:15.526450 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.48s 2025-09-02 01:02:15.526460 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 14.53s 2025-09-02 01:02:15.526470 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.80s 2025-09-02 01:02:15.526748 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 12.39s 2025-09-02 01:02:15.526813 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.27s 2025-09-02 01:02:15.526826 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.38s 2025-09-02 01:02:15.526836 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 6.29s 2025-09-02 01:02:15.526846 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.90s 2025-09-02 01:02:15.526856 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.68s 2025-09-02 01:02:15.526866 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.57s 2025-09-02 01:02:15.526875 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.33s 2025-09-02 01:02:15.526885 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.19s 2025-09-02 01:02:15.526895 | orchestrator | prometheus : Copying over 
prometheus web config file -------------------- 4.23s 2025-09-02 01:02:15.526904 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.16s 2025-09-02 01:02:15.526914 | orchestrator | prometheus : include_tasks ---------------------------------------------- 2.89s 2025-09-02 01:02:15.526923 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.76s 2025-09-02 01:02:15.526933 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.76s 2025-09-02 01:02:15.526943 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.65s 2025-09-02 01:02:18.568773 | orchestrator | 2025-09-02 01:02:18 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:02:18.569930 | orchestrator | 2025-09-02 01:02:18 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:02:18.572013 | orchestrator | 2025-09-02 01:02:18 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state STARTED 2025-09-02 01:02:18.573204 | orchestrator | 2025-09-02 01:02:18 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:02:18.573252 | orchestrator | 2025-09-02 01:02:18 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:02:21.612982 | orchestrator | 2025-09-02 01:02:21 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:02:21.613088 | orchestrator | 2025-09-02 01:02:21 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:02:21.614402 | orchestrator | 2025-09-02 01:02:21 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state STARTED 2025-09-02 01:02:21.614428 | orchestrator | 2025-09-02 01:02:21 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:02:21.614440 | orchestrator | 2025-09-02 01:02:21 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:02:24.656658 | orchestrator | 2025-09-02 01:02:24 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:02:24.658381 | orchestrator | 2025-09-02 01:02:24 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:02:24.660480 | orchestrator | 2025-09-02 01:02:24 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state STARTED 2025-09-02 01:02:24.662142 | orchestrator | 2025-09-02 01:02:24 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:02:24.662781 | orchestrator | 2025-09-02 01:02:24 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:02:27.714619 | orchestrator | 2025-09-02 01:02:27 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:02:27.717683 | orchestrator | 2025-09-02 01:02:27 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:02:27.719997 | orchestrator | 2025-09-02 01:02:27 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state STARTED 2025-09-02 01:02:27.722314 | orchestrator | 2025-09-02 01:02:27 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:02:27.722702 | orchestrator | 2025-09-02 01:02:27 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:02:30.766915 | orchestrator | 2025-09-02 01:02:30 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:02:30.769646 | orchestrator | 2025-09-02 01:02:30 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is 
in state STARTED 2025-09-02 01:02:30.771296 | orchestrator | 2025-09-02 01:02:30 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state STARTED 2025-09-02 01:02:30.772954 | orchestrator | 2025-09-02 01:02:30 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:02:30.772978 | orchestrator | 2025-09-02 01:02:30 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:02:33.816998 | orchestrator | 2025-09-02 01:02:33 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:02:33.818750 | orchestrator | 2025-09-02 01:02:33 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:02:33.821281 | orchestrator | 2025-09-02 01:02:33 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state STARTED 2025-09-02 01:02:33.822253 | orchestrator | 2025-09-02 01:02:33 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:02:33.822627 | orchestrator | 2025-09-02 01:02:33 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:02:36.864114 | orchestrator | 2025-09-02 01:02:36 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:02:36.865705 | orchestrator | 2025-09-02 01:02:36 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:02:36.867075 | orchestrator | 2025-09-02 01:02:36 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state STARTED 2025-09-02 01:02:36.868431 | orchestrator | 2025-09-02 01:02:36 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:02:36.868463 | orchestrator | 2025-09-02 01:02:36 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:02:39.926241 | orchestrator | 2025-09-02 01:02:39 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:02:39.927652 | orchestrator | 2025-09-02 01:02:39 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:02:39.928843 | orchestrator | 2025-09-02 01:02:39 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state STARTED 2025-09-02 01:02:39.929767 | orchestrator | 2025-09-02 01:02:39 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:02:39.929819 | orchestrator | 2025-09-02 01:02:39 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:02:42.971143 | orchestrator | 2025-09-02 01:02:42 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:02:42.972074 | orchestrator | 2025-09-02 01:02:42 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:02:42.972387 | orchestrator | 2025-09-02 01:02:42 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state STARTED 2025-09-02 01:02:42.973434 | orchestrator | 2025-09-02 01:02:42 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:02:42.973756 | orchestrator | 2025-09-02 01:02:42 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:02:46.028589 | orchestrator | 2025-09-02 01:02:46 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:02:46.028687 | orchestrator | 2025-09-02 01:02:46 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:02:46.028701 | orchestrator | 2025-09-02 01:02:46 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state STARTED 2025-09-02 01:02:46.028712 | orchestrator | 2025-09-02 01:02:46 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in 
state STARTED 2025-09-02 01:02:46.028724 | orchestrator | 2025-09-02 01:02:46 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:02:49.071822 | orchestrator | 2025-09-02 01:02:49 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:02:49.073833 | orchestrator | 2025-09-02 01:02:49 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:02:49.074949 | orchestrator | 2025-09-02 01:02:49 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state STARTED 2025-09-02 01:02:49.079149 | orchestrator | 2025-09-02 01:02:49 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:02:49.079361 | orchestrator | 2025-09-02 01:02:49 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:02:52.123314 | orchestrator | 2025-09-02 01:02:52 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:02:52.127411 | orchestrator | 2025-09-02 01:02:52 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:02:52.129993 | orchestrator | 2025-09-02 01:02:52 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state STARTED 2025-09-02 01:02:52.132480 | orchestrator | 2025-09-02 01:02:52 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:02:52.132810 | orchestrator | 2025-09-02 01:02:52 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:02:55.167234 | orchestrator | 2025-09-02 01:02:55 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:02:55.168412 | orchestrator | 2025-09-02 01:02:55 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:02:55.169529 | orchestrator | 2025-09-02 01:02:55 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state STARTED 2025-09-02 01:02:55.170511 | orchestrator | 2025-09-02 01:02:55 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:02:55.170658 | orchestrator | 2025-09-02 01:02:55 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:02:58.224233 | orchestrator | 2025-09-02 01:02:58 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:02:58.226678 | orchestrator | 2025-09-02 01:02:58 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:02:58.230403 | orchestrator | 2025-09-02 01:02:58 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state STARTED 2025-09-02 01:02:58.236623 | orchestrator | 2025-09-02 01:02:58 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:02:58.236671 | orchestrator | 2025-09-02 01:02:58 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:01.278098 | orchestrator | 2025-09-02 01:03:01 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:03:01.279703 | orchestrator | 2025-09-02 01:03:01 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:01.280579 | orchestrator | 2025-09-02 01:03:01 | INFO  | Task 55506ef5-d006-4145-af88-607c8bcdd335 is in state SUCCESS 2025-09-02 01:03:01.282096 | orchestrator | 2025-09-02 01:03:01 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:03:01.282240 | orchestrator | 2025-09-02 01:03:01 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:04.313893 | orchestrator | 2025-09-02 01:03:04 | INFO  | Task b04a9ca0-867b-47ac-b204-9610cd517942 is in state STARTED 2025-09-02 
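The repeated "Task … is in state STARTED" and "Wait 1 second(s) until the next check" lines are the osism client polling its queued deployment tasks until each one reports SUCCESS. Purely as an illustration of the same wait-until-done idea inside an Ansible task (this is not how the osism CLI is implemented), polling can be expressed with until/retries/delay; the state file below is hypothetical.

---
# Illustrative wait-until-done pattern; the state source is hypothetical.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Wait for a deployment task to reach SUCCESS
      ansible.builtin.command: cat /tmp/task_state   # hypothetical state source
      register: task_state
      changed_when: false
      until: task_state.stdout == "SUCCESS"
      retries: 600
      delay: 1   # one-second cadence, like "Wait 1 second(s) until the next check"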
01:03:04.316239 | orchestrator | 2025-09-02 01:03:04 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:03:04.318462 | orchestrator | 2025-09-02 01:03:04 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:04.320619 | orchestrator | 2025-09-02 01:03:04 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:03:04.320725 | orchestrator | 2025-09-02 01:03:04 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:07.357390 | orchestrator | 2025-09-02 01:03:07 | INFO  | Task b04a9ca0-867b-47ac-b204-9610cd517942 is in state STARTED 2025-09-02 01:03:07.358122 | orchestrator | 2025-09-02 01:03:07 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:03:07.358469 | orchestrator | 2025-09-02 01:03:07 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:07.359853 | orchestrator | 2025-09-02 01:03:07 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:03:07.359878 | orchestrator | 2025-09-02 01:03:07 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:10.400329 | orchestrator | 2025-09-02 01:03:10 | INFO  | Task b04a9ca0-867b-47ac-b204-9610cd517942 is in state STARTED 2025-09-02 01:03:10.402119 | orchestrator | 2025-09-02 01:03:10 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state STARTED 2025-09-02 01:03:10.405149 | orchestrator | 2025-09-02 01:03:10 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:10.405900 | orchestrator | 2025-09-02 01:03:10 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:03:10.406799 | orchestrator | 2025-09-02 01:03:10 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:13.439889 | orchestrator | 2025-09-02 01:03:13 | INFO  | Task b04a9ca0-867b-47ac-b204-9610cd517942 is in state STARTED 2025-09-02 01:03:13.440911 | orchestrator | 2025-09-02 01:03:13 | INFO  | Task a8c43069-dd7a-4a7b-9884-86828c12493a is in state SUCCESS 2025-09-02 01:03:13.442218 | orchestrator | 2025-09-02 01:03:13.442286 | orchestrator | 2025-09-02 01:03:13.442300 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-09-02 01:03:13.442312 | orchestrator | 2025-09-02 01:03:13.442324 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-09-02 01:03:13.442335 | orchestrator | Tuesday 02 September 2025 01:01:07 +0000 (0:00:00.104) 0:00:00.104 ***** 2025-09-02 01:03:13.442347 | orchestrator | changed: [localhost] 2025-09-02 01:03:13.442359 | orchestrator | 2025-09-02 01:03:13.442371 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-09-02 01:03:13.442382 | orchestrator | Tuesday 02 September 2025 01:01:07 +0000 (0:00:00.814) 0:00:00.919 ***** 2025-09-02 01:03:13.442393 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2025-09-02 01:03:13.442405 | orchestrator | changed: [localhost] 2025-09-02 01:03:13.442416 | orchestrator | 2025-09-02 01:03:13.442427 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-09-02 01:03:13.442438 | orchestrator | Tuesday 02 September 2025 01:02:12 +0000 (0:01:05.041) 0:01:05.960 ***** 2025-09-02 01:03:13.442449 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 
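The ironic-agent image downloads above failed transiently and were retried ("FAILED - RETRYING … retries left") before eventually succeeding. That is the standard retry pattern around a download task; a sketch follows, with the URL and destination path being assumptions rather than the testbed's actual values.

---
# Sketch of a download with retries; URL and destination are assumed values.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Download ironic-agent kernel
      ansible.builtin.get_url:
        url: https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos9-master.kernel
        dest: /tmp/ironic-agent.kernel
        mode: "0644"
      register: download_result
      until: download_result is succeeded
      retries: 3
      delay: 5
      # Each failed attempt is reported as
      # "FAILED - RETRYING: ... Download ironic-agent kernel (N retries left)."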
2025-09-02 01:03:13.442461 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left). 2025-09-02 01:03:13.442472 | orchestrator | changed: [localhost] 2025-09-02 01:03:13.442507 | orchestrator | 2025-09-02 01:03:13.442519 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 01:03:13.442530 | orchestrator | 2025-09-02 01:03:13.442541 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 01:03:13.442584 | orchestrator | Tuesday 02 September 2025 01:03:00 +0000 (0:00:47.139) 0:01:53.100 ***** 2025-09-02 01:03:13.442605 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:03:13.442623 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:03:13.442639 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:03:13.442650 | orchestrator | 2025-09-02 01:03:13.442661 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 01:03:13.442673 | orchestrator | Tuesday 02 September 2025 01:03:00 +0000 (0:00:00.323) 0:01:53.423 ***** 2025-09-02 01:03:13.442684 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-09-02 01:03:13.442695 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-09-02 01:03:13.442706 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-09-02 01:03:13.442717 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-09-02 01:03:13.442728 | orchestrator | 2025-09-02 01:03:13.442739 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-09-02 01:03:13.442750 | orchestrator | skipping: no hosts matched 2025-09-02 01:03:13.442762 | orchestrator | 2025-09-02 01:03:13.442772 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 01:03:13.442784 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:03:13.442807 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:03:13.442819 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:03:13.442830 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:03:13.442841 | orchestrator | 2025-09-02 01:03:13.442852 | orchestrator | 2025-09-02 01:03:13.442863 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 01:03:13.442874 | orchestrator | Tuesday 02 September 2025 01:03:00 +0000 (0:00:00.448) 0:01:53.872 ***** 2025-09-02 01:03:13.442885 | orchestrator | =============================================================================== 2025-09-02 01:03:13.442896 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 65.04s 2025-09-02 01:03:13.442907 | orchestrator | Download ironic-agent kernel ------------------------------------------- 47.14s 2025-09-02 01:03:13.442925 | orchestrator | Ensure the destination directory exists --------------------------------- 0.81s 2025-09-02 01:03:13.442953 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2025-09-02 01:03:13.442975 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-09-02 
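The placement play that follows first registers the service with Keystone via service-ks-register: it creates the placement service and its internal and public endpoints, ensures the service user exists, and grants it the admin role on the service project (the project and role already existed, hence "ok"). A rough, one-shot equivalent using the openstack CLI is sketched below; admin credentials and the password variable are assumed to be provided elsewhere, and the real role is idempotent and driven through kolla-ansible's own modules rather than shell commands.

---
# Rough, non-idempotent equivalent of the service-ks-register steps for
# placement using the openstack CLI. Credentials (OS_CLOUD or OS_* variables)
# and placement_keystone_password are assumed to be defined elsewhere.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: placement | Creating services
      ansible.builtin.command: openstack service create --name placement placement

    - name: placement | Creating endpoints
      ansible.builtin.command: >
        openstack endpoint create placement {{ item.interface }} {{ item.url }}
      loop:
        - { interface: internal, url: "https://api-int.testbed.osism.xyz:8780" }
        - { interface: public, url: "https://api.testbed.osism.xyz:8780" }

    - name: placement | Creating users
      ansible.builtin.command: >
        openstack user create --project service
        --password {{ placement_keystone_password }} placement
      no_log: true
      # no_log keeps the password out of the output; the "Module did not set
      # no_log for update_password" warning in the log is emitted during the
      # module-based user-creation step of the real role.

    - name: placement | Granting user roles
      ansible.builtin.command: >
        openstack role add --project service --user placement admin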
01:03:13.442993 | orchestrator | 2025-09-02 01:03:13.443011 | orchestrator | 2025-09-02 01:03:13.443030 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 01:03:13.443049 | orchestrator | 2025-09-02 01:03:13.443068 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 01:03:13.443080 | orchestrator | Tuesday 02 September 2025 01:02:07 +0000 (0:00:00.283) 0:00:00.283 ***** 2025-09-02 01:03:13.443091 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:03:13.443102 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:03:13.443113 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:03:13.443124 | orchestrator | 2025-09-02 01:03:13.443135 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 01:03:13.443146 | orchestrator | Tuesday 02 September 2025 01:02:07 +0000 (0:00:00.391) 0:00:00.674 ***** 2025-09-02 01:03:13.443157 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-09-02 01:03:13.443179 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-09-02 01:03:13.443190 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-09-02 01:03:13.443201 | orchestrator | 2025-09-02 01:03:13.443212 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-09-02 01:03:13.443223 | orchestrator | 2025-09-02 01:03:13.443233 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-02 01:03:13.443244 | orchestrator | Tuesday 02 September 2025 01:02:08 +0000 (0:00:00.619) 0:00:01.293 ***** 2025-09-02 01:03:13.443268 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:03:13.443280 | orchestrator | 2025-09-02 01:03:13.443291 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-09-02 01:03:13.443302 | orchestrator | Tuesday 02 September 2025 01:02:09 +0000 (0:00:00.533) 0:00:01.827 ***** 2025-09-02 01:03:13.443312 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-09-02 01:03:13.443323 | orchestrator | 2025-09-02 01:03:13.443334 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-09-02 01:03:13.443345 | orchestrator | Tuesday 02 September 2025 01:02:12 +0000 (0:00:03.503) 0:00:05.330 ***** 2025-09-02 01:03:13.443356 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-09-02 01:03:13.443367 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-09-02 01:03:13.443378 | orchestrator | 2025-09-02 01:03:13.443388 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-09-02 01:03:13.443399 | orchestrator | Tuesday 02 September 2025 01:02:19 +0000 (0:00:06.530) 0:00:11.860 ***** 2025-09-02 01:03:13.443410 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-02 01:03:13.443421 | orchestrator | 2025-09-02 01:03:13.443432 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-09-02 01:03:13.443443 | orchestrator | Tuesday 02 September 2025 01:02:22 +0000 (0:00:03.564) 0:00:15.425 ***** 2025-09-02 01:03:13.443453 | orchestrator | [WARNING]: Module did not set no_log 
for update_password 2025-09-02 01:03:13.443464 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-09-02 01:03:13.443475 | orchestrator | 2025-09-02 01:03:13.443486 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-09-02 01:03:13.443496 | orchestrator | Tuesday 02 September 2025 01:02:26 +0000 (0:00:04.131) 0:00:19.556 ***** 2025-09-02 01:03:13.443507 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-02 01:03:13.443518 | orchestrator | 2025-09-02 01:03:13.443529 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-09-02 01:03:13.443540 | orchestrator | Tuesday 02 September 2025 01:02:30 +0000 (0:00:03.473) 0:00:23.029 ***** 2025-09-02 01:03:13.443569 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-09-02 01:03:13.443580 | orchestrator | 2025-09-02 01:03:13.443591 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-02 01:03:13.443602 | orchestrator | Tuesday 02 September 2025 01:02:34 +0000 (0:00:04.508) 0:00:27.538 ***** 2025-09-02 01:03:13.443613 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:13.443623 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:13.443635 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:13.443646 | orchestrator | 2025-09-02 01:03:13.443656 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-09-02 01:03:13.443667 | orchestrator | Tuesday 02 September 2025 01:02:35 +0000 (0:00:00.287) 0:00:27.825 ***** 2025-09-02 01:03:13.443688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-02 01:03:13.443710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-02 
01:03:13.443731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-02 01:03:13.443743 | orchestrator | 2025-09-02 01:03:13.443754 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-09-02 01:03:13.443766 | orchestrator | Tuesday 02 September 2025 01:02:36 +0000 (0:00:00.954) 0:00:28.780 ***** 2025-09-02 01:03:13.443777 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:13.443788 | orchestrator | 2025-09-02 01:03:13.443798 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-09-02 01:03:13.443809 | orchestrator | Tuesday 02 September 2025 01:02:36 +0000 (0:00:00.149) 0:00:28.930 ***** 2025-09-02 01:03:13.443820 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:13.443874 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:13.443888 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:13.443923 | orchestrator | 2025-09-02 01:03:13.443935 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-02 01:03:13.443946 | orchestrator | Tuesday 02 September 2025 01:02:36 +0000 (0:00:00.461) 0:00:29.391 ***** 2025-09-02 01:03:13.443957 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:03:13.443968 | orchestrator | 2025-09-02 01:03:13.443978 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-02 01:03:13.443989 | orchestrator | Tuesday 02 September 2025 01:02:37 +0000 (0:00:00.536) 0:00:29.927 ***** 2025-09-02 01:03:13.444006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-02 01:03:13.444025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-02 01:03:13.444045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-02 01:03:13.444057 | orchestrator | 2025-09-02 01:03:13.444068 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-02 01:03:13.444079 | orchestrator | Tuesday 02 September 2025 01:02:38 +0000 (0:00:01.577) 0:00:31.505 ***** 2025-09-02 01:03:13.444090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-02 01:03:13.444102 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:13.444118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-02 01:03:13.444136 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:13.444147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-02 01:03:13.444158 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:13.444169 | orchestrator | 2025-09-02 01:03:13.444180 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-02 01:03:13.444191 | orchestrator | Tuesday 02 September 2025 01:02:39 +0000 (0:00:00.913) 0:00:32.418 ***** 2025-09-02 01:03:13.444209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-02 01:03:13.444221 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:13.444232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-02 01:03:13.444243 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:13.444255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-02 01:03:13.444272 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:13.444282 | orchestrator | 2025-09-02 01:03:13.444298 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-02 01:03:13.444309 | orchestrator | Tuesday 02 September 2025 01:02:40 +0000 (0:00:00.705) 0:00:33.123 ***** 2025-09-02 01:03:13.444320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-02 01:03:13.444332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-02 01:03:13.444351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
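The repeated item dictionaries above are the per-service definitions the placement role iterates over. A minimal Python sketch of that structure and of how the haproxy sub-entries split into an internal and an external listener; the values are copied from the testbed-node-0 item in the log, while the loop itself is only illustrative and not kolla-ansible code:

# Illustrative shape of the 'placement-api' item seen in the log (values from testbed-node-0).
placement_api = {
    "container_name": "placement_api",
    "image": "registry.osism.tech/kolla/placement-api:2024.2",
    "enabled": True,
    "volumes": [
        "/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    "healthcheck": {
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"],
        "timeout": "30",
    },
    "haproxy": {
        "placement_api": {"enabled": True, "mode": "http", "external": False,
                          "port": "8780", "listen_port": "8780", "tls_backend": "no"},
        "placement_api_external": {"enabled": True, "mode": "http", "external": True,
                                   "external_fqdn": "api.testbed.osism.xyz",
                                   "port": "8780", "listen_port": "8780", "tls_backend": "no"},
    },
}

# Each haproxy sub-entry becomes one frontend: internal on the VIP, external on the FQDN.
for name, fe in placement_api["haproxy"].items():
    target = fe.get("external_fqdn") if fe["external"] else "internal VIP"
    print(f"{name}: listens on {fe['listen_port']} -> {target}, tls_backend={fe['tls_backend']}")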
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-02 01:03:13.444363 | orchestrator | 2025-09-02 01:03:13.444374 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-02 01:03:13.444385 | orchestrator | Tuesday 02 September 2025 01:02:41 +0000 (0:00:01.362) 0:00:34.486 ***** 2025-09-02 01:03:13.444402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-02 01:03:13.444419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-02 01:03:13.444431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-02 01:03:13.444442 | orchestrator | 2025-09-02 01:03:13.444453 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-02 01:03:13.444464 | orchestrator | Tuesday 02 September 2025 01:02:44 +0000 (0:00:02.393) 0:00:36.880 ***** 2025-09-02 01:03:13.444475 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-02 01:03:13.444492 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-02 01:03:13.444503 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-02 01:03:13.444514 | orchestrator | 2025-09-02 01:03:13.444525 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-09-02 01:03:13.444536 | orchestrator | Tuesday 02 September 2025 01:02:45 +0000 (0:00:01.707) 0:00:38.587 ***** 2025-09-02 01:03:13.444573 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:03:13.444584 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:03:13.444595 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:03:13.444606 | orchestrator | 2025-09-02 01:03:13.444617 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-09-02 01:03:13.444641 | orchestrator | Tuesday 02 September 2025 01:02:47 +0000 (0:00:01.453) 0:00:40.041 ***** 2025-09-02 01:03:13.444652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-02 01:03:13.444664 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:13.444680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-02 01:03:13.444691 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:13.444703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-02 01:03:13.444714 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:13.444725 | orchestrator | 2025-09-02 01:03:13.444736 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-02 01:03:13.444747 | orchestrator | Tuesday 02 September 2025 01:02:47 +0000 (0:00:00.578) 0:00:40.619 ***** 2025-09-02 01:03:13.444765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-02 01:03:13.444784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-02 01:03:13.444796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
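The healthcheck block inside each item (healthcheck_curl against the node's API address on 8780, every 30 seconds, 3 retries) is what the "Check placement containers" step hands to the container engine. A rough Python sketch of the translation into Docker-SDK-style healthcheck options; this is an assumption about the mechanism, not the kolla template itself:

# Hedged sketch: convert the healthcheck dict from the log into the form the
# Docker SDK for Python expects (durations in nanoseconds).
def to_docker_healthcheck(hc: dict) -> dict:
    seconds = 10**9  # the Docker SDK takes nanoseconds
    return {
        "test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780']
        "interval": int(hc["interval"]) * seconds,
        "timeout": int(hc["timeout"]) * seconds,
        "retries": int(hc["retries"]),
        "start_period": int(hc["start_period"]) * seconds,
    }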
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-02 01:03:13.444807 | orchestrator | 2025-09-02 01:03:13.444822 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-02 01:03:13.444833 | orchestrator | Tuesday 02 September 2025 01:02:49 +0000 (0:00:01.356) 0:00:41.976 ***** 2025-09-02 01:03:13.444844 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:03:13.444855 | orchestrator | 2025-09-02 01:03:13.444866 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-02 01:03:13.444877 | orchestrator | Tuesday 02 September 2025 01:02:51 +0000 (0:00:02.546) 0:00:44.522 ***** 2025-09-02 01:03:13.444888 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:03:13.444899 | orchestrator | 2025-09-02 01:03:13.444910 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-09-02 01:03:13.444921 | orchestrator | Tuesday 02 September 2025 01:02:54 +0000 (0:00:02.293) 0:00:46.816 ***** 2025-09-02 01:03:13.444932 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:03:13.444942 | orchestrator | 2025-09-02 01:03:13.444953 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-02 01:03:13.444964 | orchestrator | Tuesday 02 September 2025 01:03:07 +0000 (0:00:13.014) 0:00:59.830 ***** 2025-09-02 01:03:13.444975 | orchestrator | 2025-09-02 01:03:13.444986 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-02 01:03:13.444996 | orchestrator | Tuesday 02 September 2025 01:03:07 +0000 (0:00:00.070) 0:00:59.901 ***** 2025-09-02 01:03:13.445007 | orchestrator | 2025-09-02 01:03:13.445018 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-02 01:03:13.445029 | orchestrator | Tuesday 02 September 2025 01:03:07 +0000 (0:00:00.067) 0:00:59.968 ***** 2025-09-02 01:03:13.445040 | orchestrator | 2025-09-02 01:03:13.445050 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-02 01:03:13.445061 | orchestrator | Tuesday 02 September 2025 01:03:07 +0000 (0:00:00.074) 0:01:00.043 ***** 2025-09-02 01:03:13.445072 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:03:13.445089 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:03:13.445100 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:03:13.445111 | orchestrator | 2025-09-02 01:03:13.445122 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 01:03:13.445133 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-02 
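The three changed tasks above ("Creating placement databases", "Creating placement databases user and setting permissions", "Running placement bootstrap container") plus the "Restart placement-api container" handler follow the usual bootstrap order. A hedged outline of the equivalent manual steps; the concrete commands are assumptions for illustration, only the ordering comes from the log:

# Hedged outline of the placement bootstrap sequence seen above.
bootstrap_sequence = [
    "CREATE DATABASE IF NOT EXISTS placement;",                           # Creating placement databases
    "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%';",            # databases user and permissions
    "docker run --rm ... placement-api:2024.2 placement-manage db sync",  # bootstrap container (schema migration, command assumed)
    "docker restart placement_api",                                       # handler: Restart placement-api container
]
for step in bootstrap_sequence:
    print(step)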
01:03:13.445144 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-02 01:03:13.445155 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-02 01:03:13.445166 | orchestrator |
2025-09-02 01:03:13.445177 | orchestrator |
2025-09-02 01:03:13.445195 | orchestrator | TASKS RECAP ********************************************************************
2025-09-02 01:03:13.445206 | orchestrator | Tuesday 02 September 2025 01:03:13 +0000 (0:00:05.682) 0:01:05.725 *****
2025-09-02 01:03:13.445217 | orchestrator | ===============================================================================
2025-09-02 01:03:13.445228 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.01s
2025-09-02 01:03:13.445239 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.53s
2025-09-02 01:03:13.445249 | orchestrator | placement : Restart placement-api container ----------------------------- 5.68s
2025-09-02 01:03:13.445260 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.51s
2025-09-02 01:03:13.445271 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.13s
2025-09-02 01:03:13.445282 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.56s
2025-09-02 01:03:13.445293 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.50s
2025-09-02 01:03:13.445304 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.47s
2025-09-02 01:03:13.445315 | orchestrator | placement : Creating placement databases -------------------------------- 2.55s
2025-09-02 01:03:13.445325 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.39s
2025-09-02 01:03:13.445336 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.29s
2025-09-02 01:03:13.445347 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.71s
2025-09-02 01:03:13.445358 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.58s
2025-09-02 01:03:13.445368 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.45s
2025-09-02 01:03:13.445379 | orchestrator | placement : Copying over config.json files for services ----------------- 1.36s
2025-09-02 01:03:13.445390 | orchestrator | placement : Check placement containers ---------------------------------- 1.36s
2025-09-02 01:03:13.445401 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.95s
2025-09-02 01:03:13.445412 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.91s
2025-09-02 01:03:13.445422 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.71s
2025-09-02 01:03:13.445433 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s
2025-09-02 01:03:13.445594 | orchestrator | 2025-09-02 01:03:13 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED
2025-09-02 01:03:13.445612 | orchestrator | 2025-09-02 01:03:13 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED
2025-09-02 01:03:13.445630 | orchestrator | 2025-09-02 01:03:13 | INFO  | Wait 1
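After the placement play finishes, the OSISM manager keeps polling the remaining deployment tasks, which produces the "Task <uuid> is in state STARTED" and "Wait 1 second(s) until the next check" lines around this point of the log. A small Python sketch of that polling shape; it is only illustrative, the real osism client queries its task backend differently:

import time

# Illustrative polling loop. get_state is a stand-in for however the manager
# queries task state; the task IDs are the ones visible in the log.
def wait_for_tasks(get_state, task_ids, interval=1):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)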
second(s) until the next check 2025-09-02 01:03:16.521086 | orchestrator | 2025-09-02 01:03:16 | INFO  | Task b04a9ca0-867b-47ac-b204-9610cd517942 is in state STARTED 2025-09-02 01:03:16.522187 | orchestrator | 2025-09-02 01:03:16 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:16.522451 | orchestrator | 2025-09-02 01:03:16 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:03:16.526106 | orchestrator | 2025-09-02 01:03:16 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:03:16.526130 | orchestrator | 2025-09-02 01:03:16 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:19.570652 | orchestrator | 2025-09-02 01:03:19 | INFO  | Task b04a9ca0-867b-47ac-b204-9610cd517942 is in state STARTED 2025-09-02 01:03:19.573378 | orchestrator | 2025-09-02 01:03:19 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:19.574338 | orchestrator | 2025-09-02 01:03:19 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:03:19.575384 | orchestrator | 2025-09-02 01:03:19 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:03:19.575407 | orchestrator | 2025-09-02 01:03:19 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:22.614438 | orchestrator | 2025-09-02 01:03:22 | INFO  | Task b04a9ca0-867b-47ac-b204-9610cd517942 is in state STARTED 2025-09-02 01:03:22.616051 | orchestrator | 2025-09-02 01:03:22 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:22.618389 | orchestrator | 2025-09-02 01:03:22 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:03:22.620486 | orchestrator | 2025-09-02 01:03:22 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:03:22.620654 | orchestrator | 2025-09-02 01:03:22 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:25.672197 | orchestrator | 2025-09-02 01:03:25 | INFO  | Task b04a9ca0-867b-47ac-b204-9610cd517942 is in state STARTED 2025-09-02 01:03:25.674200 | orchestrator | 2025-09-02 01:03:25 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:25.676252 | orchestrator | 2025-09-02 01:03:25 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:03:25.678175 | orchestrator | 2025-09-02 01:03:25 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:03:25.678202 | orchestrator | 2025-09-02 01:03:25 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:28.723992 | orchestrator | 2025-09-02 01:03:28 | INFO  | Task b04a9ca0-867b-47ac-b204-9610cd517942 is in state STARTED 2025-09-02 01:03:28.725110 | orchestrator | 2025-09-02 01:03:28 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:28.728041 | orchestrator | 2025-09-02 01:03:28 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:03:28.730621 | orchestrator | 2025-09-02 01:03:28 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:03:28.730651 | orchestrator | 2025-09-02 01:03:28 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:31.797006 | orchestrator | 2025-09-02 01:03:31 | INFO  | Task b04a9ca0-867b-47ac-b204-9610cd517942 is in state STARTED 2025-09-02 01:03:31.799966 | orchestrator | 2025-09-02 01:03:31 | INFO  | Task 
97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:31.802235 | orchestrator | 2025-09-02 01:03:31 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:03:31.804278 | orchestrator | 2025-09-02 01:03:31 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:03:31.804415 | orchestrator | 2025-09-02 01:03:31 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:34.860183 | orchestrator | 2025-09-02 01:03:34 | INFO  | Task b04a9ca0-867b-47ac-b204-9610cd517942 is in state STARTED 2025-09-02 01:03:34.862477 | orchestrator | 2025-09-02 01:03:34 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:34.865407 | orchestrator | 2025-09-02 01:03:34 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:03:34.868149 | orchestrator | 2025-09-02 01:03:34 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:03:34.868194 | orchestrator | 2025-09-02 01:03:34 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:37.926061 | orchestrator | 2025-09-02 01:03:37 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:03:37.926856 | orchestrator | 2025-09-02 01:03:37 | INFO  | Task b04a9ca0-867b-47ac-b204-9610cd517942 is in state SUCCESS 2025-09-02 01:03:37.931493 | orchestrator | 2025-09-02 01:03:37 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:37.933961 | orchestrator | 2025-09-02 01:03:37 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:03:37.936336 | orchestrator | 2025-09-02 01:03:37 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:03:37.936838 | orchestrator | 2025-09-02 01:03:37 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:40.993164 | orchestrator | 2025-09-02 01:03:40 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:03:40.994594 | orchestrator | 2025-09-02 01:03:40 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:40.996356 | orchestrator | 2025-09-02 01:03:40 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:03:40.998159 | orchestrator | 2025-09-02 01:03:40 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:03:40.998186 | orchestrator | 2025-09-02 01:03:40 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:44.031609 | orchestrator | 2025-09-02 01:03:44 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:03:44.035134 | orchestrator | 2025-09-02 01:03:44 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:44.036235 | orchestrator | 2025-09-02 01:03:44 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:03:44.042928 | orchestrator | 2025-09-02 01:03:44 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state STARTED 2025-09-02 01:03:44.042970 | orchestrator | 2025-09-02 01:03:44 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:47.097847 | orchestrator | 2025-09-02 01:03:47 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:03:47.098124 | orchestrator | 2025-09-02 01:03:47 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:03:47.098961 | orchestrator | 2025-09-02 01:03:47 | INFO  | Task 
97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:47.099807 | orchestrator | 2025-09-02 01:03:47 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:03:47.102101 | orchestrator | 2025-09-02 01:03:47 | INFO  | Task 1e1eee64-8989-4fbe-84f4-aa1d55e7db5e is in state SUCCESS 2025-09-02 01:03:47.103833 | orchestrator | 2025-09-02 01:03:47.103865 | orchestrator | 2025-09-02 01:03:47.103878 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 01:03:47.103892 | orchestrator | 2025-09-02 01:03:47.103904 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 01:03:47.103917 | orchestrator | Tuesday 02 September 2025 01:03:05 +0000 (0:00:00.293) 0:00:00.293 ***** 2025-09-02 01:03:47.103954 | orchestrator | ok: [testbed-manager] 2025-09-02 01:03:47.103968 | orchestrator | ok: [testbed-node-3] 2025-09-02 01:03:47.103981 | orchestrator | ok: [testbed-node-4] 2025-09-02 01:03:47.103993 | orchestrator | ok: [testbed-node-5] 2025-09-02 01:03:47.104004 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:03:47.104016 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:03:47.104027 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:03:47.104039 | orchestrator | 2025-09-02 01:03:47.104051 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 01:03:47.104062 | orchestrator | Tuesday 02 September 2025 01:03:06 +0000 (0:00:00.900) 0:00:01.194 ***** 2025-09-02 01:03:47.104128 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-09-02 01:03:47.104141 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-09-02 01:03:47.104152 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-09-02 01:03:47.104163 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-09-02 01:03:47.104174 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-09-02 01:03:47.104185 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-09-02 01:03:47.104196 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-09-02 01:03:47.104207 | orchestrator | 2025-09-02 01:03:47.104218 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-02 01:03:47.104228 | orchestrator | 2025-09-02 01:03:47.104239 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-09-02 01:03:47.104251 | orchestrator | Tuesday 02 September 2025 01:03:07 +0000 (0:00:00.807) 0:00:02.001 ***** 2025-09-02 01:03:47.104262 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:03:47.104275 | orchestrator | 2025-09-02 01:03:47.104286 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-09-02 01:03:47.104297 | orchestrator | Tuesday 02 September 2025 01:03:09 +0000 (0:00:02.076) 0:00:04.077 ***** 2025-09-02 01:03:47.104322 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-09-02 01:03:47.104334 | orchestrator | 2025-09-02 01:03:47.104345 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-09-02 01:03:47.104355 | orchestrator | Tuesday 02 September 2025 01:03:12 +0000 
(0:00:03.269) 0:00:07.347 ***** 2025-09-02 01:03:47.104367 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-09-02 01:03:47.104464 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-09-02 01:03:47.104477 | orchestrator | 2025-09-02 01:03:47.104491 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-09-02 01:03:47.104504 | orchestrator | Tuesday 02 September 2025 01:03:18 +0000 (0:00:06.385) 0:00:13.733 ***** 2025-09-02 01:03:47.104517 | orchestrator | ok: [testbed-manager] => (item=service) 2025-09-02 01:03:47.104555 | orchestrator | 2025-09-02 01:03:47.104568 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-09-02 01:03:47.104581 | orchestrator | Tuesday 02 September 2025 01:03:21 +0000 (0:00:03.101) 0:00:16.835 ***** 2025-09-02 01:03:47.104594 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-02 01:03:47.104607 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2025-09-02 01:03:47.104620 | orchestrator | 2025-09-02 01:03:47.104633 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-09-02 01:03:47.104646 | orchestrator | Tuesday 02 September 2025 01:03:25 +0000 (0:00:03.673) 0:00:20.509 ***** 2025-09-02 01:03:47.104658 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-09-02 01:03:47.104671 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2025-09-02 01:03:47.104694 | orchestrator | 2025-09-02 01:03:47.104706 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-09-02 01:03:47.104721 | orchestrator | Tuesday 02 September 2025 01:03:31 +0000 (0:00:05.848) 0:00:26.357 ***** 2025-09-02 01:03:47.104734 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2025-09-02 01:03:47.104747 | orchestrator | 2025-09-02 01:03:47.104760 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 01:03:47.104772 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:03:47.104787 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:03:47.104801 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:03:47.104815 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:03:47.104827 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:03:47.104850 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:03:47.104862 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:03:47.104872 | orchestrator | 2025-09-02 01:03:47.104883 | orchestrator | 2025-09-02 01:03:47.104894 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 01:03:47.104905 | orchestrator | Tuesday 02 September 2025 01:03:35 +0000 (0:00:04.542) 0:00:30.899 ***** 2025-09-02 01:03:47.104916 | orchestrator | 
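The service-ks-register tasks above register ceph-rgw in Keystone as the "swift" object-store service with internal and public endpoints on port 6780 and grant the ceph_rgw user the admin role, creating ResellerAdmin along the way. A hedged openstacksdk equivalent; the cloud name is an assumption and the calls are an analogue of the role, while the endpoint URLs are taken from the log:

import openstack

# Assumes a clouds.yaml entry named "testbed" with admin credentials.
conn = openstack.connect(cloud="testbed")

svc = conn.identity.create_service(name="swift", type="object-store")
endpoints = {
    "internal": "https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
    "public": "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
}
for interface, url in endpoints.items():
    conn.identity.create_endpoint(service_id=svc.id, interface=interface, url=url)

# Mirrors the role creation seen above: admin already exists, ResellerAdmin is new.
conn.identity.create_role(name="ResellerAdmin")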
=============================================================================== 2025-09-02 01:03:47.104927 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.39s 2025-09-02 01:03:47.104938 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.85s 2025-09-02 01:03:47.104948 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.54s 2025-09-02 01:03:47.104959 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.67s 2025-09-02 01:03:47.104970 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.27s 2025-09-02 01:03:47.104981 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.10s 2025-09-02 01:03:47.104991 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.08s 2025-09-02 01:03:47.105002 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.90s 2025-09-02 01:03:47.105013 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s 2025-09-02 01:03:47.105024 | orchestrator | 2025-09-02 01:03:47.105035 | orchestrator | 2025-09-02 01:03:47.105046 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 01:03:47.105056 | orchestrator | 2025-09-02 01:03:47.105067 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 01:03:47.105078 | orchestrator | Tuesday 02 September 2025 00:58:55 +0000 (0:00:00.288) 0:00:00.288 ***** 2025-09-02 01:03:47.105089 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:03:47.105100 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:03:47.105111 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:03:47.105122 | orchestrator | ok: [testbed-node-3] 2025-09-02 01:03:47.105133 | orchestrator | ok: [testbed-node-4] 2025-09-02 01:03:47.105143 | orchestrator | ok: [testbed-node-5] 2025-09-02 01:03:47.105154 | orchestrator | 2025-09-02 01:03:47.105165 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 01:03:47.105183 | orchestrator | Tuesday 02 September 2025 00:58:56 +0000 (0:00:00.849) 0:00:01.137 ***** 2025-09-02 01:03:47.105201 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-09-02 01:03:47.105212 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-09-02 01:03:47.105223 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-09-02 01:03:47.105234 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-09-02 01:03:47.105245 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-09-02 01:03:47.105256 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-09-02 01:03:47.105267 | orchestrator | 2025-09-02 01:03:47.105278 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-09-02 01:03:47.105289 | orchestrator | 2025-09-02 01:03:47.105300 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-02 01:03:47.105311 | orchestrator | Tuesday 02 September 2025 00:58:56 +0000 (0:00:00.956) 0:00:02.093 ***** 2025-09-02 01:03:47.105322 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 01:03:47.105333 | orchestrator | 2025-09-02 01:03:47.105344 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-09-02 01:03:47.105355 | orchestrator | Tuesday 02 September 2025 00:58:58 +0000 (0:00:01.631) 0:00:03.725 ***** 2025-09-02 01:03:47.105366 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:03:47.105377 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:03:47.105388 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:03:47.105399 | orchestrator | ok: [testbed-node-3] 2025-09-02 01:03:47.105410 | orchestrator | ok: [testbed-node-4] 2025-09-02 01:03:47.105420 | orchestrator | ok: [testbed-node-5] 2025-09-02 01:03:47.105431 | orchestrator | 2025-09-02 01:03:47.105442 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-09-02 01:03:47.105453 | orchestrator | Tuesday 02 September 2025 00:59:00 +0000 (0:00:01.704) 0:00:05.429 ***** 2025-09-02 01:03:47.105464 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:03:47.105475 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:03:47.105486 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:03:47.105497 | orchestrator | ok: [testbed-node-3] 2025-09-02 01:03:47.105508 | orchestrator | ok: [testbed-node-4] 2025-09-02 01:03:47.105518 | orchestrator | ok: [testbed-node-5] 2025-09-02 01:03:47.105544 | orchestrator | 2025-09-02 01:03:47.105555 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-09-02 01:03:47.105566 | orchestrator | Tuesday 02 September 2025 00:59:01 +0000 (0:00:01.219) 0:00:06.649 ***** 2025-09-02 01:03:47.105577 | orchestrator | ok: [testbed-node-0] => { 2025-09-02 01:03:47.105589 | orchestrator |  "changed": false, 2025-09-02 01:03:47.105599 | orchestrator |  "msg": "All assertions passed" 2025-09-02 01:03:47.105611 | orchestrator | } 2025-09-02 01:03:47.105622 | orchestrator | ok: [testbed-node-1] => { 2025-09-02 01:03:47.105633 | orchestrator |  "changed": false, 2025-09-02 01:03:47.105644 | orchestrator |  "msg": "All assertions passed" 2025-09-02 01:03:47.105655 | orchestrator | } 2025-09-02 01:03:47.105666 | orchestrator | ok: [testbed-node-2] => { 2025-09-02 01:03:47.105677 | orchestrator |  "changed": false, 2025-09-02 01:03:47.105687 | orchestrator |  "msg": "All assertions passed" 2025-09-02 01:03:47.105698 | orchestrator | } 2025-09-02 01:03:47.105709 | orchestrator | ok: [testbed-node-3] => { 2025-09-02 01:03:47.105720 | orchestrator |  "changed": false, 2025-09-02 01:03:47.105731 | orchestrator |  "msg": "All assertions passed" 2025-09-02 01:03:47.105742 | orchestrator | } 2025-09-02 01:03:47.105753 | orchestrator | ok: [testbed-node-4] => { 2025-09-02 01:03:47.105763 | orchestrator |  "changed": false, 2025-09-02 01:03:47.105775 | orchestrator |  "msg": "All assertions passed" 2025-09-02 01:03:47.105785 | orchestrator | } 2025-09-02 01:03:47.105796 | orchestrator | ok: [testbed-node-5] => { 2025-09-02 01:03:47.105813 | orchestrator |  "changed": false, 2025-09-02 01:03:47.105824 | orchestrator |  "msg": "All assertions passed" 2025-09-02 01:03:47.105835 | orchestrator | } 2025-09-02 01:03:47.105853 | orchestrator | 2025-09-02 01:03:47.105865 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-09-02 01:03:47.105875 | orchestrator | Tuesday 02 September 2025 00:59:02 +0000 (0:00:00.780) 0:00:07.429 ***** 2025-09-02 01:03:47.105886 | orchestrator | skipping: 
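The "Get container facts" step at the start of this neutron play collects the state of the existing containers on every host, which is what the ML2/OVN and ML2/OVS presence checks above and below evaluate. A rough local equivalent using the Docker SDK for Python; this assumes the SDK is available on the host, whereas kolla-ansible uses its own container facts module:

import docker  # assumption: docker SDK for Python installed on the host

client = docker.from_env()
container_facts = {c.name: c.status for c in client.containers.list(all=True)}
# e.g. check whether an ML2/OVS agent container is still around before asserting ML2/OVN
print(container_facts.get("neutron_openvswitch_agent", "absent"))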
[testbed-node-0] 2025-09-02 01:03:47.105897 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.105908 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.105919 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.105929 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.105940 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.105951 | orchestrator | 2025-09-02 01:03:47.105962 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-09-02 01:03:47.105973 | orchestrator | Tuesday 02 September 2025 00:59:02 +0000 (0:00:00.609) 0:00:08.039 ***** 2025-09-02 01:03:47.105984 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-09-02 01:03:47.105995 | orchestrator | 2025-09-02 01:03:47.106005 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-09-02 01:03:47.106055 | orchestrator | Tuesday 02 September 2025 00:59:06 +0000 (0:00:03.540) 0:00:11.580 ***** 2025-09-02 01:03:47.106069 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-09-02 01:03:47.106082 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-09-02 01:03:47.106093 | orchestrator | 2025-09-02 01:03:47.106104 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-09-02 01:03:47.106116 | orchestrator | Tuesday 02 September 2025 00:59:12 +0000 (0:00:05.833) 0:00:17.413 ***** 2025-09-02 01:03:47.106127 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-02 01:03:47.106138 | orchestrator | 2025-09-02 01:03:47.106149 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-09-02 01:03:47.106160 | orchestrator | Tuesday 02 September 2025 00:59:15 +0000 (0:00:03.004) 0:00:20.417 ***** 2025-09-02 01:03:47.106171 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-02 01:03:47.106182 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-09-02 01:03:47.106193 | orchestrator | 2025-09-02 01:03:47.106204 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-09-02 01:03:47.106221 | orchestrator | Tuesday 02 September 2025 00:59:19 +0000 (0:00:03.862) 0:00:24.279 ***** 2025-09-02 01:03:47.106232 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-02 01:03:47.106244 | orchestrator | 2025-09-02 01:03:47.106255 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-09-02 01:03:47.106266 | orchestrator | Tuesday 02 September 2025 00:59:23 +0000 (0:00:04.555) 0:00:28.835 ***** 2025-09-02 01:03:47.106277 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-09-02 01:03:47.106288 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-09-02 01:03:47.106299 | orchestrator | 2025-09-02 01:03:47.106310 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-02 01:03:47.106321 | orchestrator | Tuesday 02 September 2025 00:59:31 +0000 (0:00:07.477) 0:00:36.312 ***** 2025-09-02 01:03:47.106332 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.106343 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.106354 | orchestrator | skipping: [testbed-node-2] 
2025-09-02 01:03:47.106365 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.106376 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.106387 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.106398 | orchestrator | 2025-09-02 01:03:47.106409 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-09-02 01:03:47.106420 | orchestrator | Tuesday 02 September 2025 00:59:31 +0000 (0:00:00.738) 0:00:37.051 ***** 2025-09-02 01:03:47.106431 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.106442 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.106459 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.106471 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.106481 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.106492 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.106503 | orchestrator | 2025-09-02 01:03:47.106514 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-09-02 01:03:47.106583 | orchestrator | Tuesday 02 September 2025 00:59:34 +0000 (0:00:02.725) 0:00:39.776 ***** 2025-09-02 01:03:47.106598 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:03:47.106609 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:03:47.106620 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:03:47.106631 | orchestrator | ok: [testbed-node-3] 2025-09-02 01:03:47.106642 | orchestrator | ok: [testbed-node-4] 2025-09-02 01:03:47.106653 | orchestrator | ok: [testbed-node-5] 2025-09-02 01:03:47.106665 | orchestrator | 2025-09-02 01:03:47.106676 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-02 01:03:47.106687 | orchestrator | Tuesday 02 September 2025 00:59:36 +0000 (0:00:01.816) 0:00:41.593 ***** 2025-09-02 01:03:47.106698 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.106709 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.106720 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.106730 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.106741 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.106752 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.106763 | orchestrator | 2025-09-02 01:03:47.106774 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-09-02 01:03:47.106786 | orchestrator | Tuesday 02 September 2025 00:59:39 +0000 (0:00:02.571) 0:00:44.164 ***** 2025-09-02 01:03:47.106810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 01:03:47.106826 | 
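The "Check IPv6 support" task above reports ok on every node before the sysctl step is (here) skipped. A hedged sketch of what such a probe can look like; the role's actual implementation may differ:

from pathlib import Path

# Hedged sketch of an IPv6 availability probe: IPv6 counts as enabled when the
# kernel exposes the ipv6 sysctl tree and it is not globally disabled.
def ipv6_enabled() -> bool:
    flag = Path("/proc/sys/net/ipv6/conf/all/disable_ipv6")
    return flag.exists() and flag.read_text().strip() == "0"

print(ipv6_enabled())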
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 01:03:47.106845 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-02 01:03:47.106866 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-02 01:03:47.106878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-02 01:03:47.106995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 01:03:47.107010 | orchestrator | 2025-09-02 01:03:47.107021 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-02 01:03:47.107033 | orchestrator | Tuesday 02 September 2025 00:59:42 +0000 (0:00:03.170) 0:00:47.335 ***** 2025-09-02 01:03:47.107044 | orchestrator | [WARNING]: Skipped 2025-09-02 01:03:47.107098 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-02 01:03:47.107110 | orchestrator | due to this access issue: 2025-09-02 01:03:47.107121 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-02 01:03:47.107133 | orchestrator | a directory 2025-09-02 01:03:47.107144 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-02 01:03:47.107155 | orchestrator | 2025-09-02 01:03:47.107166 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-02 01:03:47.107177 | orchestrator | Tuesday 02 September 2025 00:59:44 +0000 (0:00:01.913) 0:00:49.248 ***** 2025-09-02 01:03:47.107188 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 01:03:47.107200 | orchestrator | 2025-09-02 01:03:47.107211 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-02 01:03:47.107230 | orchestrator | Tuesday 02 September 2025 00:59:45 +0000 (0:00:01.770) 0:00:51.019 ***** 2025-09-02 01:03:47.107248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 01:03:47.107260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
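The WARNING above about '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' not being a directory is harmless: the path lookup simply finds no overlay directory, so no extra ML2 plugins are copied. A small Python sketch of the same probe; it is illustrative, the warning format suggests the role does this with Ansible's find module:

from pathlib import Path

# Illustrative re-implementation of the extra-plugin probe: list plugin files if
# the overlay directory exists, otherwise return an empty list (which is what the
# warning above amounts to).
plugins_dir = Path("/opt/configuration/environments/kolla/files/overlays/neutron/plugins/")
extra_ml2_plugins = sorted(p.name for p in plugins_dir.iterdir()) if plugins_dir.is_dir() else []
print(extra_ml2_plugins)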
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 01:03:47.107280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 01:03:47.107293 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-02 01:03:47.107304 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-02 01:03:47.107330 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-02 01:03:47.107342 | orchestrator | 2025-09-02 01:03:47.107353 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-02 01:03:47.107364 | orchestrator | Tuesday 02 September 2025 00:59:50 +0000 (0:00:04.893) 0:00:55.912 ***** 2025-09-02 01:03:47.107375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.107387 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.107405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.107417 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.107429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.107454 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.107471 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.107484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.107496 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.107507 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.107518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.107550 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.107561 | orchestrator | 2025-09-02 01:03:47.107572 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-02 01:03:47.107583 | orchestrator | Tuesday 02 September 2025 00:59:54 +0000 (0:00:03.347) 0:00:59.259 ***** 2025-09-02 01:03:47.107604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2025-09-02 01:03:47.107616 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.107628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.107647 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.107663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.107675 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.107686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.107698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.107709 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.107721 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.107740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.107758 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.107770 | orchestrator | 2025-09-02 01:03:47.107781 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-02 01:03:47.107792 | orchestrator | Tuesday 02 September 2025 00:59:57 +0000 (0:00:03.537) 0:01:02.797 ***** 2025-09-02 01:03:47.107803 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.107814 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.107825 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.107836 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.107847 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.107858 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.107869 | orchestrator | 2025-09-02 01:03:47.107880 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-02 01:03:47.107891 | orchestrator | Tuesday 02 September 2025 01:00:00 +0000 (0:00:03.046) 0:01:05.844 ***** 2025-09-02 01:03:47.107902 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.107913 | orchestrator | 2025-09-02 01:03:47.107924 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-02 01:03:47.107935 | orchestrator | Tuesday 02 September 2025 01:00:00 +0000 (0:00:00.121) 0:01:05.966 ***** 2025-09-02 01:03:47.107946 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.107957 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.107968 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.107979 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.107990 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.108001 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.108012 | orchestrator | 2025-09-02 01:03:47.108023 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-02 01:03:47.108039 | orchestrator | Tuesday 02 September 2025 01:00:01 +0000 (0:00:00.790) 0:01:06.757 ***** 2025-09-02 01:03:47.108051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.108062 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.108073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.108092 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.108112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.108124 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.108136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 
01:03:47.108147 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.108163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.108175 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.108186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.108198 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.108209 | orchestrator | 2025-09-02 01:03:47.108220 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-09-02 01:03:47.108231 | orchestrator | Tuesday 02 September 2025 01:00:05 +0000 (0:00:03.616) 0:01:10.373 ***** 2025-09-02 01:03:47.108242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 01:03:47.108271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 01:03:47.108284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 01:03:47.108300 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-02 01:03:47.108313 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-02 01:03:47.108325 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-02 01:03:47.108342 | orchestrator | 2025-09-02 01:03:47.108354 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-09-02 01:03:47.108365 | orchestrator | Tuesday 02 September 2025 01:00:09 +0000 (0:00:04.655) 0:01:15.029 ***** 2025-09-02 01:03:47.108384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 01:03:47.108397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 01:03:47.108414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 01:03:47.108426 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-02 01:03:47.108443 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-02 01:03:47.108463 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-02 01:03:47.108475 | orchestrator | 2025-09-02 01:03:47.108486 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-09-02 01:03:47.108498 | orchestrator | Tuesday 02 September 2025 01:00:17 +0000 (0:00:07.205) 0:01:22.235 ***** 2025-09-02 01:03:47.108515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.108584 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.108596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.108615 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.108627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.108638 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.108658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.108670 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.108681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.108693 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.108712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.108724 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.108735 | orchestrator | 2025-09-02 01:03:47.108746 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-09-02 01:03:47.108758 | orchestrator | Tuesday 02 September 2025 01:00:20 +0000 (0:00:03.304) 0:01:25.540 ***** 2025-09-02 01:03:47.108769 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.108780 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.108790 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.108801 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:03:47.108820 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:03:47.108831 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:03:47.108842 | orchestrator | 2025-09-02 01:03:47.108853 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-09-02 01:03:47.108864 | orchestrator | Tuesday 02 September 2025 01:00:23 +0000 (0:00:03.083) 0:01:28.624 ***** 2025-09-02 01:03:47.108876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.108887 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.108898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.108910 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.108929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.108941 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.108953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 01:03:47.108970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 01:03:47.108988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 01:03:47.109000 | orchestrator | 2025-09-02 01:03:47.109011 | orchestrator | TASK [neutron : Copying 
over linuxbridge_agent.ini] **************************** 2025-09-02 01:03:47.109022 | orchestrator | Tuesday 02 September 2025 01:00:28 +0000 (0:00:05.239) 0:01:33.863 ***** 2025-09-02 01:03:47.109034 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.109045 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.109056 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.109067 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.109078 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.109089 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.109100 | orchestrator | 2025-09-02 01:03:47.109111 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-09-02 01:03:47.109122 | orchestrator | Tuesday 02 September 2025 01:00:32 +0000 (0:00:03.492) 0:01:37.356 ***** 2025-09-02 01:03:47.109133 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.109144 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.109155 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.109166 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.109177 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.109194 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.109206 | orchestrator | 2025-09-02 01:03:47.109217 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-09-02 01:03:47.109228 | orchestrator | Tuesday 02 September 2025 01:00:35 +0000 (0:00:02.816) 0:01:40.172 ***** 2025-09-02 01:03:47.109239 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.109251 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.109262 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.109272 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.109283 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.109294 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.109305 | orchestrator | 2025-09-02 01:03:47.109316 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-09-02 01:03:47.109327 | orchestrator | Tuesday 02 September 2025 01:00:38 +0000 (0:00:03.197) 0:01:43.370 ***** 2025-09-02 01:03:47.109338 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.109349 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.109360 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.109371 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.109382 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.109399 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.109410 | orchestrator | 2025-09-02 01:03:47.109422 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-09-02 01:03:47.109433 | orchestrator | Tuesday 02 September 2025 01:00:42 +0000 (0:00:03.758) 0:01:47.128 ***** 2025-09-02 01:03:47.109444 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.109455 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.109465 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.109476 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.109487 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.109498 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.109509 | orchestrator | 2025-09-02 01:03:47.109520 | orchestrator | TASK [neutron : 
Copying over dhcp_agent.ini] *********************************** 2025-09-02 01:03:47.109547 | orchestrator | Tuesday 02 September 2025 01:00:44 +0000 (0:00:02.459) 0:01:49.588 ***** 2025-09-02 01:03:47.109558 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.109570 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.109581 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.109591 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.109602 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.109613 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.109624 | orchestrator | 2025-09-02 01:03:47.109635 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-09-02 01:03:47.109651 | orchestrator | Tuesday 02 September 2025 01:00:46 +0000 (0:00:02.198) 0:01:51.786 ***** 2025-09-02 01:03:47.109662 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-02 01:03:47.109673 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.109685 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-02 01:03:47.109696 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.109707 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-02 01:03:47.109718 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.109729 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-02 01:03:47.109740 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.109752 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-02 01:03:47.109763 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.109774 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-02 01:03:47.109785 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.109796 | orchestrator | 2025-09-02 01:03:47.109807 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-09-02 01:03:47.109818 | orchestrator | Tuesday 02 September 2025 01:00:49 +0000 (0:00:02.695) 0:01:54.482 ***** 2025-09-02 01:03:47.109830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.109841 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.110159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.110187 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.110198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.110210 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.110227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.110239 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.110250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.110261 | orchestrator | skipping: [testbed-node-4] 2025-09-02 
01:03:47.110273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.110293 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.110304 | orchestrator | 2025-09-02 01:03:47.110315 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-02 01:03:47.110326 | orchestrator | Tuesday 02 September 2025 01:00:53 +0000 (0:00:04.138) 0:01:58.620 ***** 2025-09-02 01:03:47.110344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.110356 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.110367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.110384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.110396 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.110407 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.110418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.110436 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.110452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.110464 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.110475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.110486 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.110497 | orchestrator | 2025-09-02 01:03:47.110508 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-09-02 01:03:47.110519 | orchestrator | Tuesday 02 September 2025 01:00:56 +0000 (0:00:03.268) 0:02:01.889 ***** 2025-09-02 01:03:47.110563 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.110574 | orchestrator | skipping: 
[testbed-node-2] 2025-09-02 01:03:47.110585 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.110596 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.110607 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.110618 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.110629 | orchestrator | 2025-09-02 01:03:47.110640 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-09-02 01:03:47.110651 | orchestrator | Tuesday 02 September 2025 01:01:03 +0000 (0:00:06.443) 0:02:08.333 ***** 2025-09-02 01:03:47.110662 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.110673 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.110684 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.110695 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:03:47.110705 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:03:47.110716 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:03:47.110727 | orchestrator | 2025-09-02 01:03:47.110743 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-09-02 01:03:47.110755 | orchestrator | Tuesday 02 September 2025 01:01:08 +0000 (0:00:04.814) 0:02:13.148 ***** 2025-09-02 01:03:47.110768 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.110781 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.110794 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.110806 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.110819 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.110831 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.110843 | orchestrator | 2025-09-02 01:03:47.110871 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-02 01:03:47.110901 | orchestrator | Tuesday 02 September 2025 01:01:10 +0000 (0:00:02.440) 0:02:15.589 ***** 2025-09-02 01:03:47.110916 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.110928 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.110940 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.110953 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.110965 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.110977 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.110990 | orchestrator | 2025-09-02 01:03:47.111002 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-02 01:03:47.111015 | orchestrator | Tuesday 02 September 2025 01:01:12 +0000 (0:00:01.771) 0:02:17.361 ***** 2025-09-02 01:03:47.111028 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.111041 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.111053 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.111065 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.111077 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.111090 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.111104 | orchestrator | 2025-09-02 01:03:47.111117 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-02 01:03:47.111128 | orchestrator | Tuesday 02 September 2025 01:01:14 +0000 (0:00:02.608) 0:02:19.970 ***** 2025-09-02 01:03:47.111139 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.111150 | orchestrator | 
skipping: [testbed-node-1] 2025-09-02 01:03:47.111161 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.111173 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.111184 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.111195 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.111206 | orchestrator | 2025-09-02 01:03:47.111217 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-09-02 01:03:47.111228 | orchestrator | Tuesday 02 September 2025 01:01:18 +0000 (0:00:03.857) 0:02:23.827 ***** 2025-09-02 01:03:47.111239 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.111249 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.111260 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.111271 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.111282 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.111293 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.111304 | orchestrator | 2025-09-02 01:03:47.111315 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-02 01:03:47.111326 | orchestrator | Tuesday 02 September 2025 01:01:20 +0000 (0:00:01.710) 0:02:25.537 ***** 2025-09-02 01:03:47.111337 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.111348 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.111359 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.111370 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.111381 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.111392 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.111403 | orchestrator | 2025-09-02 01:03:47.111420 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-02 01:03:47.111431 | orchestrator | Tuesday 02 September 2025 01:01:22 +0000 (0:00:01.720) 0:02:27.257 ***** 2025-09-02 01:03:47.111443 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.111454 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.111465 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.111476 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.111487 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.111498 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.111509 | orchestrator | 2025-09-02 01:03:47.111520 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-09-02 01:03:47.111550 | orchestrator | Tuesday 02 September 2025 01:01:23 +0000 (0:00:01.569) 0:02:28.827 ***** 2025-09-02 01:03:47.111562 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-02 01:03:47.111580 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-02 01:03:47.111591 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.111602 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.111613 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-02 01:03:47.111625 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.111636 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-02 01:03:47.111647 | orchestrator | 
skipping: [testbed-node-5] 2025-09-02 01:03:47.111658 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-02 01:03:47.111669 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.111680 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-02 01:03:47.111692 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.111703 | orchestrator | 2025-09-02 01:03:47.111714 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-02 01:03:47.111725 | orchestrator | Tuesday 02 September 2025 01:01:27 +0000 (0:00:03.429) 0:02:32.256 ***** 2025-09-02 01:03:47.111741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.111754 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.111765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.111776 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.111794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-02 01:03:47.111816 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.111828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.111840 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.111856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.111868 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.111879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-02 01:03:47.111891 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.111902 | orchestrator | 2025-09-02 01:03:47.111913 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-02 01:03:47.111924 | orchestrator | Tuesday 02 September 2025 01:01:29 +0000 (0:00:02.816) 0:02:35.073 ***** 2025-09-02 01:03:47.111935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 01:03:47.111953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 01:03:47.111972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-02 01:03:47.111989 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-02 01:03:47.112002 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-02 01:03:47.112014 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-02 01:03:47.112025 | orchestrator | 2025-09-02 01:03:47.112036 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-02 01:03:47.112054 | orchestrator | Tuesday 02 September 2025 01:01:33 +0000 (0:00:03.850) 0:02:38.924 ***** 2025-09-02 01:03:47.112065 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:03:47.112077 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:03:47.112088 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:03:47.112099 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:03:47.112110 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:03:47.112121 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:03:47.112132 | orchestrator | 2025-09-02 01:03:47.112143 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-09-02 01:03:47.112155 | orchestrator | Tuesday 02 September 2025 01:01:34 +0000 (0:00:01.160) 0:02:40.084 ***** 2025-09-02 01:03:47.112171 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:03:47.112183 | orchestrator | 2025-09-02 01:03:47.112194 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-09-02 01:03:47.112205 | orchestrator | Tuesday 02 September 2025 01:01:37 +0000 (0:00:02.357) 0:02:42.442 ***** 2025-09-02 01:03:47.112216 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:03:47.112227 | orchestrator | 2025-09-02 01:03:47.112238 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-09-02 01:03:47.112249 | orchestrator | Tuesday 02 September 2025 01:01:39 +0000 (0:00:02.632) 0:02:45.074 ***** 2025-09-02 01:03:47.112260 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:03:47.112271 | orchestrator | 2025-09-02 01:03:47.112282 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-02 01:03:47.112293 | orchestrator | Tuesday 02 September 2025 01:02:20 +0000 (0:00:40.901) 0:03:25.976 ***** 2025-09-02 01:03:47.112304 | orchestrator | 2025-09-02 01:03:47.112315 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-02 01:03:47.112326 | orchestrator | Tuesday 02 September 2025 01:02:20 +0000 (0:00:00.065) 0:03:26.041 ***** 2025-09-02 01:03:47.112337 | orchestrator | 2025-09-02 01:03:47.112348 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 
2025-09-02 01:03:47.112359 | orchestrator | Tuesday 02 September 2025 01:02:21 +0000 (0:00:00.238) 0:03:26.280 ***** 2025-09-02 01:03:47.112370 | orchestrator | 2025-09-02 01:03:47.112381 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-02 01:03:47.112392 | orchestrator | Tuesday 02 September 2025 01:02:21 +0000 (0:00:00.064) 0:03:26.344 ***** 2025-09-02 01:03:47.112403 | orchestrator | 2025-09-02 01:03:47.112414 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-02 01:03:47.112425 | orchestrator | Tuesday 02 September 2025 01:02:21 +0000 (0:00:00.070) 0:03:26.415 ***** 2025-09-02 01:03:47.112436 | orchestrator | 2025-09-02 01:03:47.112447 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-02 01:03:47.112458 | orchestrator | Tuesday 02 September 2025 01:02:21 +0000 (0:00:00.064) 0:03:26.479 ***** 2025-09-02 01:03:47.112469 | orchestrator | 2025-09-02 01:03:47.112480 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-02 01:03:47.112491 | orchestrator | Tuesday 02 September 2025 01:02:21 +0000 (0:00:00.067) 0:03:26.546 ***** 2025-09-02 01:03:47.112502 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:03:47.112513 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:03:47.112542 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:03:47.112554 | orchestrator | 2025-09-02 01:03:47.112570 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-02 01:03:47.112581 | orchestrator | Tuesday 02 September 2025 01:02:45 +0000 (0:00:23.846) 0:03:50.393 ***** 2025-09-02 01:03:47.112592 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:03:47.112603 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:03:47.112615 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:03:47.112626 | orchestrator | 2025-09-02 01:03:47.112637 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 01:03:47.112648 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-02 01:03:47.112669 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-02 01:03:47.112681 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-02 01:03:47.112692 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-02 01:03:47.112704 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-02 01:03:47.112715 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-02 01:03:47.112726 | orchestrator | 2025-09-02 01:03:47.112737 | orchestrator | 2025-09-02 01:03:47.112748 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 01:03:47.112759 | orchestrator | Tuesday 02 September 2025 01:03:43 +0000 (0:00:58.255) 0:04:48.648 ***** 2025-09-02 01:03:47.112770 | orchestrator | =============================================================================== 2025-09-02 01:03:47.112781 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 58.26s 2025-09-02 01:03:47.112792 | 
orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.90s 2025-09-02 01:03:47.112803 | orchestrator | neutron : Restart neutron-server container ----------------------------- 23.85s 2025-09-02 01:03:47.112814 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.48s 2025-09-02 01:03:47.112825 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.21s 2025-09-02 01:03:47.112836 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 6.45s 2025-09-02 01:03:47.112847 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 5.83s 2025-09-02 01:03:47.112858 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.24s 2025-09-02 01:03:47.112869 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.89s 2025-09-02 01:03:47.112880 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.81s 2025-09-02 01:03:47.112896 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.66s 2025-09-02 01:03:47.112908 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 4.56s 2025-09-02 01:03:47.112919 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 4.14s 2025-09-02 01:03:47.112930 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.86s 2025-09-02 01:03:47.112941 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 3.86s 2025-09-02 01:03:47.112952 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.85s 2025-09-02 01:03:47.112962 | orchestrator | neutron : Copying over mlnx_agent.ini ----------------------------------- 3.76s 2025-09-02 01:03:47.112973 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.62s 2025-09-02 01:03:47.112984 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.54s 2025-09-02 01:03:47.112995 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.54s 2025-09-02 01:03:47.113006 | orchestrator | 2025-09-02 01:03:47 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:50.152113 | orchestrator | 2025-09-02 01:03:50 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:03:50.152599 | orchestrator | 2025-09-02 01:03:50 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:03:50.153349 | orchestrator | 2025-09-02 01:03:50 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:50.154320 | orchestrator | 2025-09-02 01:03:50 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:03:50.154342 | orchestrator | 2025-09-02 01:03:50 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:53.186453 | orchestrator | 2025-09-02 01:03:53 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:03:53.186794 | orchestrator | 2025-09-02 01:03:53 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:03:53.188127 | orchestrator | 2025-09-02 01:03:53 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:53.188152 | orchestrator | 
2025-09-02 01:03:53 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:03:53.188165 | orchestrator | 2025-09-02 01:03:53 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:56.224689 | orchestrator | 2025-09-02 01:03:56 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:03:56.226746 | orchestrator | 2025-09-02 01:03:56 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:03:56.229429 | orchestrator | 2025-09-02 01:03:56 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:56.229997 | orchestrator | 2025-09-02 01:03:56 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:03:56.230068 | orchestrator | 2025-09-02 01:03:56 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:03:59.265254 | orchestrator | 2025-09-02 01:03:59 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:03:59.265996 | orchestrator | 2025-09-02 01:03:59 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:03:59.266894 | orchestrator | 2025-09-02 01:03:59 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:03:59.268392 | orchestrator | 2025-09-02 01:03:59 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:03:59.268417 | orchestrator | 2025-09-02 01:03:59 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:04:02.307150 | orchestrator | 2025-09-02 01:04:02 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:02.308864 | orchestrator | 2025-09-02 01:04:02 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:02.310352 | orchestrator | 2025-09-02 01:04:02 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:04:02.314306 | orchestrator | 2025-09-02 01:04:02 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:04:02.314332 | orchestrator | 2025-09-02 01:04:02 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:04:05.355079 | orchestrator | 2025-09-02 01:04:05 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:05.357755 | orchestrator | 2025-09-02 01:04:05 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:05.358610 | orchestrator | 2025-09-02 01:04:05 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state STARTED 2025-09-02 01:04:05.360625 | orchestrator | 2025-09-02 01:04:05 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:04:05.360661 | orchestrator | 2025-09-02 01:04:05 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:04:08.397474 | orchestrator | 2025-09-02 01:04:08 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:08.397655 | orchestrator | 2025-09-02 01:04:08 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:08.398656 | orchestrator | 2025-09-02 01:04:08 | INFO  | Task 97bf8787-1aaf-48a7-bc70-40a18f2fdfb1 is in state SUCCESS 2025-09-02 01:04:08.400341 | orchestrator | 2025-09-02 01:04:08.400377 | orchestrator | 2025-09-02 01:04:08.400389 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 01:04:08.400401 | orchestrator | 2025-09-02 01:04:08.400412 | orchestrator | TASK [Group hosts 
based on Kolla action] *************************************** 2025-09-02 01:04:08.400424 | orchestrator | Tuesday 02 September 2025 01:02:17 +0000 (0:00:00.265) 0:00:00.265 ***** 2025-09-02 01:04:08.400435 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:04:08.400448 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:04:08.400459 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:04:08.400470 | orchestrator | 2025-09-02 01:04:08.400489 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 01:04:08.400507 | orchestrator | Tuesday 02 September 2025 01:02:18 +0000 (0:00:00.327) 0:00:00.592 ***** 2025-09-02 01:04:08.400557 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-09-02 01:04:08.400576 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-09-02 01:04:08.400595 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-09-02 01:04:08.400615 | orchestrator | 2025-09-02 01:04:08.400628 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-09-02 01:04:08.400639 | orchestrator | 2025-09-02 01:04:08.400650 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-02 01:04:08.400661 | orchestrator | Tuesday 02 September 2025 01:02:18 +0000 (0:00:00.440) 0:00:01.032 ***** 2025-09-02 01:04:08.400673 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:04:08.400685 | orchestrator | 2025-09-02 01:04:08.400713 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-09-02 01:04:08.400724 | orchestrator | Tuesday 02 September 2025 01:02:19 +0000 (0:00:00.540) 0:00:01.573 ***** 2025-09-02 01:04:08.400736 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-09-02 01:04:08.400747 | orchestrator | 2025-09-02 01:04:08.400758 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-09-02 01:04:08.400769 | orchestrator | Tuesday 02 September 2025 01:02:22 +0000 (0:00:03.600) 0:00:05.174 ***** 2025-09-02 01:04:08.400779 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-09-02 01:04:08.400791 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-09-02 01:04:08.400801 | orchestrator | 2025-09-02 01:04:08.400812 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-09-02 01:04:08.400823 | orchestrator | Tuesday 02 September 2025 01:02:29 +0000 (0:00:06.751) 0:00:11.926 ***** 2025-09-02 01:04:08.400834 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-02 01:04:08.400845 | orchestrator | 2025-09-02 01:04:08.401013 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-09-02 01:04:08.401026 | orchestrator | Tuesday 02 September 2025 01:02:33 +0000 (0:00:03.441) 0:00:15.367 ***** 2025-09-02 01:04:08.401038 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-02 01:04:08.401050 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-09-02 01:04:08.401062 | orchestrator | 2025-09-02 01:04:08.401074 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-09-02 01:04:08.401086 | 
orchestrator | Tuesday 02 September 2025 01:02:37 +0000 (0:00:04.065) 0:00:19.433 ***** 2025-09-02 01:04:08.401098 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-02 01:04:08.401110 | orchestrator | 2025-09-02 01:04:08.401121 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-09-02 01:04:08.401149 | orchestrator | Tuesday 02 September 2025 01:02:40 +0000 (0:00:03.339) 0:00:22.773 ***** 2025-09-02 01:04:08.401161 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-09-02 01:04:08.401172 | orchestrator | 2025-09-02 01:04:08.401184 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-09-02 01:04:08.401196 | orchestrator | Tuesday 02 September 2025 01:02:44 +0000 (0:00:04.427) 0:00:27.200 ***** 2025-09-02 01:04:08.401207 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:04:08.401219 | orchestrator | 2025-09-02 01:04:08.401232 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-09-02 01:04:08.401244 | orchestrator | Tuesday 02 September 2025 01:02:48 +0000 (0:00:03.426) 0:00:30.627 ***** 2025-09-02 01:04:08.401256 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:04:08.401268 | orchestrator | 2025-09-02 01:04:08.401280 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-09-02 01:04:08.401291 | orchestrator | Tuesday 02 September 2025 01:02:52 +0000 (0:00:04.027) 0:00:34.654 ***** 2025-09-02 01:04:08.401303 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:04:08.401315 | orchestrator | 2025-09-02 01:04:08.401326 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-09-02 01:04:08.401338 | orchestrator | Tuesday 02 September 2025 01:02:56 +0000 (0:00:03.895) 0:00:38.550 ***** 2025-09-02 01:04:08.401369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 01:04:08.401386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 01:04:08.401405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 01:04:08.401426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:04:08.401439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:04:08.401459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:04:08.401472 | orchestrator | 2025-09-02 01:04:08.401484 | orchestrator | 
TASK [magnum : Check if policies shall be overwritten] ************************* 2025-09-02 01:04:08.401496 | orchestrator | Tuesday 02 September 2025 01:02:57 +0000 (0:00:01.774) 0:00:40.325 ***** 2025-09-02 01:04:08.401508 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:04:08.401547 | orchestrator | 2025-09-02 01:04:08.401558 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-09-02 01:04:08.401569 | orchestrator | Tuesday 02 September 2025 01:02:58 +0000 (0:00:00.145) 0:00:40.470 ***** 2025-09-02 01:04:08.401586 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:04:08.401605 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:04:08.401625 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:04:08.401646 | orchestrator | 2025-09-02 01:04:08.401666 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-09-02 01:04:08.401685 | orchestrator | Tuesday 02 September 2025 01:02:58 +0000 (0:00:00.530) 0:00:41.001 ***** 2025-09-02 01:04:08.401700 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-02 01:04:08.401714 | orchestrator | 2025-09-02 01:04:08.401728 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-09-02 01:04:08.401740 | orchestrator | Tuesday 02 September 2025 01:02:59 +0000 (0:00:00.970) 0:00:41.971 ***** 2025-09-02 01:04:08.401760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 01:04:08.401783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 01:04:08.401797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 01:04:08.401822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:04:08.401837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:04:08.401857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:04:08.401877 | orchestrator | 2025-09-02 01:04:08.401889 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-02 01:04:08.401900 | orchestrator | Tuesday 02 September 2025 01:03:02 +0000 (0:00:02.531) 0:00:44.503 ***** 2025-09-02 01:04:08.401911 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:04:08.401923 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:04:08.401933 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:04:08.401944 | orchestrator | 2025-09-02 01:04:08.401955 | orchestrator | TASK 
[magnum : include_tasks] ************************************************** 2025-09-02 01:04:08.401966 | orchestrator | Tuesday 02 September 2025 01:03:02 +0000 (0:00:00.339) 0:00:44.842 ***** 2025-09-02 01:04:08.401977 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:04:08.401988 | orchestrator | 2025-09-02 01:04:08.401999 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-02 01:04:08.402010 | orchestrator | Tuesday 02 September 2025 01:03:03 +0000 (0:00:00.723) 0:00:45.566 ***** 2025-09-02 01:04:08.402082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 01:04:08.402103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 01:04:08.402116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 01:04:08.402145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:04:08.402157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:04:08.402169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:04:08.402180 | orchestrator | 2025-09-02 01:04:08.402191 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-02 01:04:08.402202 | orchestrator | Tuesday 02 September 2025 01:03:05 +0000 (0:00:02.501) 0:00:48.068 ***** 2025-09-02 01:04:08.402221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-02 01:04:08.402233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-02 01:04:08.402253 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:04:08.402269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-02 01:04:08.402281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-02 01:04:08.402292 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:04:08.402304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-02 01:04:08.402323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-02 01:04:08.402342 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:04:08.402353 | orchestrator | 2025-09-02 01:04:08.402364 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-02 01:04:08.402375 | orchestrator | Tuesday 02 September 2025 01:03:06 +0000 (0:00:00.609) 0:00:48.677 ***** 2025-09-02 01:04:08.402397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-02 01:04:08.402409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-02 01:04:08.402421 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:04:08.402432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-02 01:04:08.402444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-02 01:04:08.402462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-02 01:04:08.402485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-02 01:04:08.402497 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:04:08.402509 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:04:08.402551 | orchestrator | 2025-09-02 01:04:08.402571 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-02 01:04:08.402590 | orchestrator | Tuesday 02 September 2025 01:03:07 +0000 (0:00:01.038) 0:00:49.716 ***** 2025-09-02 01:04:08.402604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 01:04:08.402622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 01:04:08.402652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 01:04:08.402684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:04:08.402703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:04:08.402715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:04:08.402726 | orchestrator | 2025-09-02 01:04:08.402737 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-02 01:04:08.402748 | orchestrator | Tuesday 02 September 2025 01:03:10 +0000 (0:00:02.994) 0:00:52.710 ***** 2025-09-02 01:04:08.402759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 01:04:08.402778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 01:04:08.402796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 01:04:08.402813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:04:08.402824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:04:08.402836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:04:08.402847 | orchestrator | 2025-09-02 01:04:08.402858 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-02 01:04:08.402876 | orchestrator | Tuesday 02 September 2025 01:03:16 +0000 (0:00:05.666) 0:00:58.377 ***** 2025-09-02 01:04:08.402894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-02 01:04:08.402905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-02 01:04:08.402916 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:04:08.402933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-02 01:04:08.402945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-02 01:04:08.402956 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:04:08.402968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-02 01:04:08.402993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-02 01:04:08.403005 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:04:08.403016 | orchestrator | 2025-09-02 01:04:08.403027 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-02 01:04:08.403038 | orchestrator | Tuesday 02 September 2025 01:03:16 +0000 (0:00:00.775) 0:00:59.152 ***** 2025-09-02 01:04:08.403054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 01:04:08.403067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 01:04:08.403078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-02 01:04:08.403096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:04:08.403115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:04:08.403131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:04:08.403143 | orchestrator | 2025-09-02 01:04:08.403154 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-02 01:04:08.403165 | orchestrator | Tuesday 02 September 2025 01:03:19 +0000 
(0:00:02.683) 0:01:01.835 *****
2025-09-02 01:04:08.403176 | orchestrator | skipping: [testbed-node-0]
2025-09-02 01:04:08.403187 | orchestrator | skipping: [testbed-node-1]
2025-09-02 01:04:08.403198 | orchestrator | skipping: [testbed-node-2]
2025-09-02 01:04:08.403209 | orchestrator |
2025-09-02 01:04:08.403220 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-09-02 01:04:08.403231 | orchestrator | Tuesday 02 September 2025 01:03:19 +0000 (0:00:00.335) 0:01:02.171 *****
2025-09-02 01:04:08.403242 | orchestrator | changed: [testbed-node-0]
2025-09-02 01:04:08.403253 | orchestrator |
2025-09-02 01:04:08.403264 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-09-02 01:04:08.403275 | orchestrator | Tuesday 02 September 2025 01:03:22 +0000 (0:00:02.208) 0:01:04.380 *****
2025-09-02 01:04:08.403286 | orchestrator | changed: [testbed-node-0]
2025-09-02 01:04:08.403297 | orchestrator |
2025-09-02 01:04:08.403309 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-09-02 01:04:08.403320 | orchestrator | Tuesday 02 September 2025 01:03:24 +0000 (0:00:02.298) 0:01:06.678 *****
2025-09-02 01:04:08.403331 | orchestrator | changed: [testbed-node-0]
2025-09-02 01:04:08.403342 | orchestrator |
2025-09-02 01:04:08.403353 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-02 01:04:08.403364 | orchestrator | Tuesday 02 September 2025 01:03:41 +0000 (0:00:16.818) 0:01:23.496 *****
2025-09-02 01:04:08.403381 | orchestrator |
2025-09-02 01:04:08.403392 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-02 01:04:08.403403 | orchestrator | Tuesday 02 September 2025 01:03:41 +0000 (0:00:00.081) 0:01:23.578 *****
2025-09-02 01:04:08.403414 | orchestrator |
2025-09-02 01:04:08.403425 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-02 01:04:08.403436 | orchestrator | Tuesday 02 September 2025 01:03:41 +0000 (0:00:00.083) 0:01:23.662 *****
2025-09-02 01:04:08.403447 | orchestrator |
2025-09-02 01:04:08.403458 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-09-02 01:04:08.403469 | orchestrator | Tuesday 02 September 2025 01:03:41 +0000 (0:00:00.084) 0:01:23.746 *****
2025-09-02 01:04:08.403480 | orchestrator | changed: [testbed-node-0]
2025-09-02 01:04:08.403491 | orchestrator | changed: [testbed-node-1]
2025-09-02 01:04:08.403502 | orchestrator | changed: [testbed-node-2]
2025-09-02 01:04:08.403581 | orchestrator |
2025-09-02 01:04:08.403595 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-09-02 01:04:08.403606 | orchestrator | Tuesday 02 September 2025 01:03:55 +0000 (0:00:14.317) 0:01:38.063 *****
2025-09-02 01:04:08.403617 | orchestrator | changed: [testbed-node-0]
2025-09-02 01:04:08.403628 | orchestrator | changed: [testbed-node-1]
2025-09-02 01:04:08.403639 | orchestrator | changed: [testbed-node-2]
2025-09-02 01:04:08.403656 | orchestrator |
2025-09-02 01:04:08.403675 | orchestrator | PLAY RECAP *********************************************************************
2025-09-02 01:04:08.403696 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-02 01:04:08.403717 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-02 01:04:08.403728 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-02 01:04:08.403739 | orchestrator |
2025-09-02 01:04:08.403750 | orchestrator |
2025-09-02 01:04:08.403761 | orchestrator | TASKS RECAP ********************************************************************
2025-09-02 01:04:08.403772 | orchestrator | Tuesday 02 September 2025 01:04:06 +0000 (0:00:10.502) 0:01:48.566 *****
2025-09-02 01:04:08.403783 | orchestrator | ===============================================================================
2025-09-02 01:04:08.403794 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.82s
2025-09-02 01:04:08.403812 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.32s
2025-09-02 01:04:08.403824 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.50s
2025-09-02 01:04:08.403835 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.75s
2025-09-02 01:04:08.403845 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.67s
2025-09-02 01:04:08.403856 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.43s
2025-09-02 01:04:08.403867 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.07s
2025-09-02 01:04:08.403878 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.03s
2025-09-02 01:04:08.403889 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.90s
2025-09-02 01:04:08.403900 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.60s
2025-09-02 01:04:08.403910 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.44s
2025-09-02 01:04:08.403921 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.43s
2025-09-02 01:04:08.403932 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.34s
2025-09-02 01:04:08.403943 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.99s
2025-09-02 01:04:08.403962 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.68s
2025-09-02 01:04:08.403981 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.53s
2025-09-02 01:04:08.403993 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.50s
2025-09-02 01:04:08.404004 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.30s
2025-09-02 01:04:08.404014 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.21s
2025-09-02 01:04:08.404025 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.77s
2025-09-02 01:04:08.404036 | orchestrator | 2025-09-02 01:04:08 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED
2025-09-02 01:04:08.404047 | orchestrator | 2025-09-02 01:04:08 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED
2025-09-02 01:04:08.404058 | orchestrator | 2025-09-02 01:04:08 | INFO  | Wait 1 second(s) until the next check
2025-09-02 01:04:11.445640 | orchestrator | 
2025-09-02 01:04:11 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:11.447589 | orchestrator | 2025-09-02 01:04:11 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:11.448160 | orchestrator | 2025-09-02 01:04:11 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:04:11.448841 | orchestrator | 2025-09-02 01:04:11 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:04:11.448947 | orchestrator | 2025-09-02 01:04:11 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:04:14.473479 | orchestrator | 2025-09-02 01:04:14 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:14.473624 | orchestrator | 2025-09-02 01:04:14 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:14.474162 | orchestrator | 2025-09-02 01:04:14 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:04:14.475956 | orchestrator | 2025-09-02 01:04:14 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:04:14.476049 | orchestrator | 2025-09-02 01:04:14 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:04:17.503754 | orchestrator | 2025-09-02 01:04:17 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:17.503824 | orchestrator | 2025-09-02 01:04:17 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:17.503836 | orchestrator | 2025-09-02 01:04:17 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:04:17.503847 | orchestrator | 2025-09-02 01:04:17 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:04:17.503858 | orchestrator | 2025-09-02 01:04:17 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:04:20.533840 | orchestrator | 2025-09-02 01:04:20 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:20.534270 | orchestrator | 2025-09-02 01:04:20 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:20.535212 | orchestrator | 2025-09-02 01:04:20 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:04:20.535956 | orchestrator | 2025-09-02 01:04:20 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:04:20.535977 | orchestrator | 2025-09-02 01:04:20 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:04:23.566328 | orchestrator | 2025-09-02 01:04:23 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:23.566621 | orchestrator | 2025-09-02 01:04:23 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:23.567423 | orchestrator | 2025-09-02 01:04:23 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:04:23.568368 | orchestrator | 2025-09-02 01:04:23 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:04:23.568391 | orchestrator | 2025-09-02 01:04:23 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:04:26.611298 | orchestrator | 2025-09-02 01:04:26 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:26.611716 | orchestrator | 2025-09-02 01:04:26 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:26.612635 | orchestrator | 
2025-09-02 01:04:26 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:04:26.613480 | orchestrator | 2025-09-02 01:04:26 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:04:26.613577 | orchestrator | 2025-09-02 01:04:26 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:04:29.653141 | orchestrator | 2025-09-02 01:04:29 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:29.653640 | orchestrator | 2025-09-02 01:04:29 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:29.654338 | orchestrator | 2025-09-02 01:04:29 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:04:29.655318 | orchestrator | 2025-09-02 01:04:29 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:04:29.655342 | orchestrator | 2025-09-02 01:04:29 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:04:32.699817 | orchestrator | 2025-09-02 01:04:32 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:32.701043 | orchestrator | 2025-09-02 01:04:32 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:32.701624 | orchestrator | 2025-09-02 01:04:32 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:04:32.702941 | orchestrator | 2025-09-02 01:04:32 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:04:32.702959 | orchestrator | 2025-09-02 01:04:32 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:04:35.736468 | orchestrator | 2025-09-02 01:04:35 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:35.737056 | orchestrator | 2025-09-02 01:04:35 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:35.737749 | orchestrator | 2025-09-02 01:04:35 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:04:35.738722 | orchestrator | 2025-09-02 01:04:35 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:04:35.738742 | orchestrator | 2025-09-02 01:04:35 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:04:38.768286 | orchestrator | 2025-09-02 01:04:38 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:38.768540 | orchestrator | 2025-09-02 01:04:38 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:38.769386 | orchestrator | 2025-09-02 01:04:38 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:04:38.771016 | orchestrator | 2025-09-02 01:04:38 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:04:38.771069 | orchestrator | 2025-09-02 01:04:38 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:04:41.801042 | orchestrator | 2025-09-02 01:04:41 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:41.801151 | orchestrator | 2025-09-02 01:04:41 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:41.801788 | orchestrator | 2025-09-02 01:04:41 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:04:41.802448 | orchestrator | 2025-09-02 01:04:41 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:04:41.802470 | orchestrator | 
2025-09-02 01:04:41 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:04:44.831347 | orchestrator | 2025-09-02 01:04:44 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:44.831662 | orchestrator | 2025-09-02 01:04:44 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:44.832231 | orchestrator | 2025-09-02 01:04:44 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:04:44.832830 | orchestrator | 2025-09-02 01:04:44 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:04:44.832853 | orchestrator | 2025-09-02 01:04:44 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:04:47.855302 | orchestrator | 2025-09-02 01:04:47 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:47.856391 | orchestrator | 2025-09-02 01:04:47 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:47.857195 | orchestrator | 2025-09-02 01:04:47 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:04:47.858385 | orchestrator | 2025-09-02 01:04:47 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:04:47.858469 | orchestrator | 2025-09-02 01:04:47 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:04:50.893337 | orchestrator | 2025-09-02 01:04:50 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:50.893806 | orchestrator | 2025-09-02 01:04:50 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:50.894721 | orchestrator | 2025-09-02 01:04:50 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:04:50.895602 | orchestrator | 2025-09-02 01:04:50 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:04:50.895625 | orchestrator | 2025-09-02 01:04:50 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:04:53.923748 | orchestrator | 2025-09-02 01:04:53 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:53.925889 | orchestrator | 2025-09-02 01:04:53 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:53.926354 | orchestrator | 2025-09-02 01:04:53 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:04:53.926928 | orchestrator | 2025-09-02 01:04:53 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:04:53.926951 | orchestrator | 2025-09-02 01:04:53 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:04:56.959512 | orchestrator | 2025-09-02 01:04:56 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:04:56.959749 | orchestrator | 2025-09-02 01:04:56 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:04:56.960338 | orchestrator | 2025-09-02 01:04:56 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:04:56.960984 | orchestrator | 2025-09-02 01:04:56 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:04:56.961007 | orchestrator | 2025-09-02 01:04:56 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:05:00.017100 | orchestrator | 2025-09-02 01:05:00 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:05:00.017620 | orchestrator | 2025-09-02 01:05:00 | INFO 
 | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:05:00.020585 | orchestrator | 2025-09-02 01:05:00 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:05:00.022197 | orchestrator | 2025-09-02 01:05:00 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:05:00.022219 | orchestrator | 2025-09-02 01:05:00 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:05:03.059760 | orchestrator | 2025-09-02 01:05:03 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:05:03.060406 | orchestrator | 2025-09-02 01:05:03 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:05:03.063752 | orchestrator | 2025-09-02 01:05:03 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:05:03.066181 | orchestrator | 2025-09-02 01:05:03 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:05:03.066686 | orchestrator | 2025-09-02 01:05:03 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:05:06.094215 | orchestrator | 2025-09-02 01:05:06 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:05:06.096786 | orchestrator | 2025-09-02 01:05:06 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:05:06.097797 | orchestrator | 2025-09-02 01:05:06 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:05:06.100343 | orchestrator | 2025-09-02 01:05:06 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:05:06.100368 | orchestrator | 2025-09-02 01:05:06 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:05:09.142563 | orchestrator | 2025-09-02 01:05:09 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:05:09.146250 | orchestrator | 2025-09-02 01:05:09 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:05:09.147338 | orchestrator | 2025-09-02 01:05:09 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:05:09.148185 | orchestrator | 2025-09-02 01:05:09 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:05:09.148207 | orchestrator | 2025-09-02 01:05:09 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:05:12.191405 | orchestrator | 2025-09-02 01:05:12 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:05:12.192934 | orchestrator | 2025-09-02 01:05:12 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:05:12.194675 | orchestrator | 2025-09-02 01:05:12 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:05:12.199636 | orchestrator | 2025-09-02 01:05:12 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:05:12.199706 | orchestrator | 2025-09-02 01:05:12 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:05:15.252757 | orchestrator | 2025-09-02 01:05:15 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:05:15.254858 | orchestrator | 2025-09-02 01:05:15 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:05:15.256692 | orchestrator | 2025-09-02 01:05:15 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:05:15.259604 | orchestrator | 2025-09-02 01:05:15 | INFO  | 
Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:05:15.259644 | orchestrator | 2025-09-02 01:05:15 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:05:18.305837 | orchestrator | 2025-09-02 01:05:18 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:05:18.306374 | orchestrator | 2025-09-02 01:05:18 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:05:18.307555 | orchestrator | 2025-09-02 01:05:18 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:05:18.308864 | orchestrator | 2025-09-02 01:05:18 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:05:18.308888 | orchestrator | 2025-09-02 01:05:18 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:05:21.353684 | orchestrator | 2025-09-02 01:05:21 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:05:21.355160 | orchestrator | 2025-09-02 01:05:21 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:05:21.356753 | orchestrator | 2025-09-02 01:05:21 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:05:21.357880 | orchestrator | 2025-09-02 01:05:21 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:05:21.358191 | orchestrator | 2025-09-02 01:05:21 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:05:24.406109 | orchestrator | 2025-09-02 01:05:24 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:05:24.407896 | orchestrator | 2025-09-02 01:05:24 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:05:24.409685 | orchestrator | 2025-09-02 01:05:24 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:05:24.411194 | orchestrator | 2025-09-02 01:05:24 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:05:24.411220 | orchestrator | 2025-09-02 01:05:24 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:05:27.456028 | orchestrator | 2025-09-02 01:05:27 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:05:27.458535 | orchestrator | 2025-09-02 01:05:27 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:05:27.461127 | orchestrator | 2025-09-02 01:05:27 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:05:27.462645 | orchestrator | 2025-09-02 01:05:27 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:05:27.462668 | orchestrator | 2025-09-02 01:05:27 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:05:30.516904 | orchestrator | 2025-09-02 01:05:30 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:05:30.519129 | orchestrator | 2025-09-02 01:05:30 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:05:30.520520 | orchestrator | 2025-09-02 01:05:30 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:05:30.522268 | orchestrator | 2025-09-02 01:05:30 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:05:30.522294 | orchestrator | 2025-09-02 01:05:30 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:05:33.579434 | orchestrator | 2025-09-02 01:05:33 | INFO  | Task 
e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:05:33.581993 | orchestrator | 2025-09-02 01:05:33 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:05:33.583816 | orchestrator | 2025-09-02 01:05:33 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:05:33.585746 | orchestrator | 2025-09-02 01:05:33 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:05:33.586310 | orchestrator | 2025-09-02 01:05:33 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:05:36.629877 | orchestrator | 2025-09-02 01:05:36 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:05:36.631418 | orchestrator | 2025-09-02 01:05:36 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:05:36.633863 | orchestrator | 2025-09-02 01:05:36 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:05:36.636583 | orchestrator | 2025-09-02 01:05:36 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:05:36.636627 | orchestrator | 2025-09-02 01:05:36 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:05:39.676594 | orchestrator | 2025-09-02 01:05:39 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:05:39.679203 | orchestrator | 2025-09-02 01:05:39 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:05:39.679587 | orchestrator | 2025-09-02 01:05:39 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:05:39.682374 | orchestrator | 2025-09-02 01:05:39 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:05:39.682399 | orchestrator | 2025-09-02 01:05:39 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:05:42.713042 | orchestrator | 2025-09-02 01:05:42 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:05:42.713682 | orchestrator | 2025-09-02 01:05:42 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:05:42.714813 | orchestrator | 2025-09-02 01:05:42 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:05:42.715785 | orchestrator | 2025-09-02 01:05:42 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:05:42.715863 | orchestrator | 2025-09-02 01:05:42 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:05:45.760081 | orchestrator | 2025-09-02 01:05:45 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:05:45.761662 | orchestrator | 2025-09-02 01:05:45 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:05:45.763764 | orchestrator | 2025-09-02 01:05:45 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED 2025-09-02 01:05:45.765288 | orchestrator | 2025-09-02 01:05:45 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED 2025-09-02 01:05:45.765679 | orchestrator | 2025-09-02 01:05:45 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:05:48.807314 | orchestrator | 2025-09-02 01:05:48 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:05:48.809660 | orchestrator | 2025-09-02 01:05:48 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:05:48.813526 | orchestrator | 2025-09-02 01:05:48 | INFO  | Task 
37625813-2b84-4432-876b-dbb888dc866c is in state STARTED
2025-09-02 01:05:48.816542 | orchestrator | 2025-09-02 01:05:48 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state STARTED
2025-09-02 01:05:48.817756 | orchestrator | 2025-09-02 01:05:48 | INFO  | Wait 1 second(s) until the next check
2025-09-02 01:05:51 to 01:06:16 | orchestrator | INFO  | Tasks e3bdef6b-9d00-4c55-9ab4-6d0a11937e35, bdd8faff-c5fc-496b-96ee-b0cc88184b4d, 37625813-2b84-4432-876b-dbb888dc866c and 28f7d92d-2a6e-4096-a22f-8fc278231931 reported in state STARTED on each poll (checks at 01:05:51, 01:05:54, 01:05:58, 01:06:01, 01:06:04, 01:06:07, 01:06:10, 01:06:13 and 01:06:16, each followed by "Wait 1 second(s) until the next check")
2025-09-02 01:06:19.398720 | orchestrator | 2025-09-02 01:06:19 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED
2025-09-02 01:06:19.399471 | orchestrator | 2025-09-02 01:06:19 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED
2025-09-02 01:06:19.400735 | orchestrator | 2025-09-02 01:06:19 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED
2025-09-02 01:06:19.401829 | orchestrator | 2025-09-02 01:06:19 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED
2025-09-02 01:06:19.403658 | orchestrator | 2025-09-02 01:06:19 | INFO  | Task 28f7d92d-2a6e-4096-a22f-8fc278231931 is in state SUCCESS
2025-09-02 01:06:19.403700 | orchestrator | 2025-09-02 01:06:19 | INFO  | Wait 1 second(s) until the next check
2025-09-02 01:06:19.405225 | orchestrator |
2025-09-02 01:06:19.405265 | orchestrator |
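The block above is the OSISM client polling the state of the Ansible tasks it has queued: every few seconds it asks for the state of each task ID, prints it, and sleeps before the next round until a task reaches a terminal state such as SUCCESS. A minimal sketch of such a loop, assuming a hypothetical get_task_state(task_id) helper that returns the state string (the real client queries its task broker instead):

import time

# Terminal states reported by the task queue (as seen in this log: STARTED -> SUCCESS).
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll task states until every task reaches a terminal state.

    get_task_state is a hypothetical callable (task_id -> state string);
    it stands in for whatever the actual client uses to query task status.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)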
2025-09-02 01:06:19.405277 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-02 01:06:19.405308 | orchestrator |
2025-09-02 01:06:19.405330 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-02 01:06:19.405342 | orchestrator | Tuesday 02 September 2025 01:03:18 +0000 (0:00:00.260) 0:00:00.261 *****
2025-09-02 01:06:19.405353 | orchestrator | ok: [testbed-node-0]
2025-09-02 01:06:19.405366 | orchestrator | ok: [testbed-node-1]
2025-09-02 01:06:19.405404 | orchestrator | ok: [testbed-node-2]
2025-09-02 01:06:19.405452 | orchestrator |
2025-09-02 01:06:19.405464 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-02 01:06:19.405476 | orchestrator | Tuesday 02 September 2025 01:03:18 +0000 (0:00:00.359) 0:00:00.620 *****
2025-09-02 01:06:19.405487 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-09-02 01:06:19.405499 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-09-02 01:06:19.405509 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-09-02 01:06:19.405520 | orchestrator |
2025-09-02 01:06:19.405531 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-09-02 01:06:19.405542 | orchestrator |
2025-09-02 01:06:19.405553 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-02 01:06:19.405564 | orchestrator | Tuesday 02 September 2025 01:03:19 +0000 (0:00:00.412) 0:00:01.032 *****
2025-09-02 01:06:19.405575 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-02 01:06:19.405586 | orchestrator |
2025-09-02 01:06:19.405597 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-09-02 01:06:19.405608 | orchestrator | Tuesday 02 September 2025 01:03:19 +0000 (0:00:00.566) 0:00:01.598 *****
2025-09-02 01:06:19.405619 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-09-02 01:06:19.405630 | orchestrator |
2025-09-02 01:06:19.405641 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-09-02 01:06:19.405652 | orchestrator | Tuesday 02 September 2025 01:03:23 +0000 (0:00:03.446) 0:00:05.045 *****
2025-09-02 01:06:19.405663 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-09-02 01:06:19.405674 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-09-02 01:06:19.405685 | orchestrator |
2025-09-02 01:06:19.405696 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-09-02 01:06:19.405707 | orchestrator | Tuesday 02 September 2025 01:03:29 +0000 (0:00:06.660) 0:00:11.705 *****
2025-09-02 01:06:19.405718 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-02 01:06:19.405730 | orchestrator |
2025-09-02 01:06:19.405741 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-09-02 01:06:19.405752 | orchestrator | Tuesday 02 September 2025 01:03:33 +0000 (0:00:03.205) 0:00:14.910 *****
2025-09-02 01:06:19.405763 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-02 01:06:19.405774 | orchestrator | changed: [testbed-node-0] => (item=glance ->
service) 2025-09-02 01:06:19.405785 | orchestrator | 2025-09-02 01:06:19.405796 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-09-02 01:06:19.405807 | orchestrator | Tuesday 02 September 2025 01:03:37 +0000 (0:00:04.065) 0:00:18.976 ***** 2025-09-02 01:06:19.405818 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-02 01:06:19.405828 | orchestrator | 2025-09-02 01:06:19.405840 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-09-02 01:06:19.405853 | orchestrator | Tuesday 02 September 2025 01:03:40 +0000 (0:00:03.367) 0:00:22.344 ***** 2025-09-02 01:06:19.405866 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-02 01:06:19.405879 | orchestrator | 2025-09-02 01:06:19.405892 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-02 01:06:19.405905 | orchestrator | Tuesday 02 September 2025 01:03:44 +0000 (0:00:04.245) 0:00:26.589 ***** 2025-09-02 01:06:19.405955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-02 01:06:19.405983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-02 01:06:19.406005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-02 01:06:19.406081 | orchestrator | 2025-09-02 01:06:19.406098 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-02 01:06:19.406111 | orchestrator | Tuesday 02 September 2025 01:03:48 +0000 (0:00:03.732) 0:00:30.322 ***** 2025-09-02 01:06:19.406124 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:06:19.406137 | orchestrator | 2025-09-02 01:06:19.406158 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-09-02 01:06:19.406172 | orchestrator | Tuesday 02 September 2025 01:03:49 +0000 (0:00:00.721) 0:00:31.043 ***** 2025-09-02 01:06:19.406185 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:06:19.406199 | 
orchestrator | changed: [testbed-node-2] 2025-09-02 01:06:19.406210 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:06:19.406221 | orchestrator | 2025-09-02 01:06:19.406232 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-09-02 01:06:19.406242 | orchestrator | Tuesday 02 September 2025 01:03:52 +0000 (0:00:03.749) 0:00:34.792 ***** 2025-09-02 01:06:19.406253 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-02 01:06:19.406264 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-02 01:06:19.406275 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-02 01:06:19.406286 | orchestrator | 2025-09-02 01:06:19.406297 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-09-02 01:06:19.406308 | orchestrator | Tuesday 02 September 2025 01:03:54 +0000 (0:00:01.540) 0:00:36.333 ***** 2025-09-02 01:06:19.406318 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-02 01:06:19.406329 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-02 01:06:19.406340 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-02 01:06:19.406351 | orchestrator | 2025-09-02 01:06:19.406362 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-09-02 01:06:19.406373 | orchestrator | Tuesday 02 September 2025 01:03:55 +0000 (0:00:01.190) 0:00:37.524 ***** 2025-09-02 01:06:19.406383 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:06:19.406395 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:06:19.406405 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:06:19.406463 | orchestrator | 2025-09-02 01:06:19.406481 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-09-02 01:06:19.406500 | orchestrator | Tuesday 02 September 2025 01:03:56 +0000 (0:00:00.670) 0:00:38.195 ***** 2025-09-02 01:06:19.406518 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:06:19.406536 | orchestrator | 2025-09-02 01:06:19.406547 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-09-02 01:06:19.406558 | orchestrator | Tuesday 02 September 2025 01:03:56 +0000 (0:00:00.331) 0:00:38.527 ***** 2025-09-02 01:06:19.406568 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:06:19.406579 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:06:19.406590 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:06:19.406609 | orchestrator | 2025-09-02 01:06:19.406620 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-02 01:06:19.406630 | orchestrator | Tuesday 02 September 2025 01:03:57 +0000 (0:00:00.372) 0:00:38.899 ***** 2025-09-02 01:06:19.406641 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:06:19.406652 | orchestrator | 2025-09-02 01:06:19.406663 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-09-02 01:06:19.406674 | orchestrator | 
Tuesday 02 September 2025 01:03:57 +0000 (0:00:00.787) 0:00:39.687 ***** 2025-09-02 01:06:19.406699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-02 01:06:19.406714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
2025-09-02 01:06:19.406739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-02 01:06:19.406752 | orchestrator | 2025-09-02 01:06:19.406762 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-02 01:06:19.406773 | orchestrator | Tuesday 02 September 2025 01:04:02 +0000 (0:00:04.801) 0:00:44.488 ***** 2025-09-02 01:06:19.406793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-02 01:06:19.406806 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:06:19.406818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-02 01:06:19.406838 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:06:19.406862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-02 01:06:19.406875 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:06:19.406886 | orchestrator | 2025-09-02 01:06:19.406897 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-02 01:06:19.406908 | orchestrator | Tuesday 02 September 2025 01:04:05 +0000 (0:00:03.275) 0:00:47.763 ***** 2025-09-02 01:06:19.406919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-02 01:06:19.406938 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:06:19.406962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-02 01:06:19.406974 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:06:19.406986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-02 01:06:19.407013 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:06:19.407024 | orchestrator | 2025-09-02 01:06:19.407035 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-02 01:06:19.407046 | orchestrator | Tuesday 02 September 2025 01:04:09 +0000 (0:00:03.687) 0:00:51.450 ***** 2025-09-02 01:06:19.407057 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:06:19.407067 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:06:19.407078 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:06:19.407089 | orchestrator | 2025-09-02 01:06:19.407099 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-02 01:06:19.407110 | orchestrator | Tuesday 02 September 2025 01:04:12 +0000 (0:00:03.412) 0:00:54.863 ***** 2025-09-02 01:06:19.407131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-02 01:06:19.407144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-02 01:06:19.407169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-02 01:06:19.407181 | orchestrator | 2025-09-02 01:06:19.407192 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-09-02 01:06:19.407203 | orchestrator | Tuesday 02 September 2025 01:04:17 +0000 (0:00:04.484) 0:00:59.347 ***** 2025-09-02 01:06:19.407214 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:06:19.407224 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:06:19.407235 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:06:19.407245 | orchestrator | 2025-09-02 01:06:19.407256 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-09-02 01:06:19.407267 | orchestrator | Tuesday 02 September 2025 01:04:25 +0000 (0:00:07.692) 0:01:07.039 ***** 2025-09-02 01:06:19.407278 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:06:19.407289 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:06:19.407300 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:06:19.407310 | orchestrator | 2025-09-02 01:06:19.407321 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-09-02 01:06:19.407338 | orchestrator | Tuesday 02 September 2025 01:04:29 +0000 (0:00:04.133) 0:01:11.172 ***** 2025-09-02 01:06:19.407349 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:06:19.407359 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:06:19.407370 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:06:19.407381 | orchestrator | 2025-09-02 01:06:19.407392 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-09-02 01:06:19.407436 | orchestrator | Tuesday 02 September 2025 01:04:33 +0000 (0:00:04.109) 0:01:15.282 ***** 2025-09-02 01:06:19.407449 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:06:19.407460 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:06:19.407470 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:06:19.407481 | orchestrator | 2025-09-02 01:06:19.407492 | orchestrator | TASK 
[glance : Copying over property-protections-rules.conf] ******************* 2025-09-02 01:06:19.407503 | orchestrator | Tuesday 02 September 2025 01:04:38 +0000 (0:00:04.771) 0:01:20.053 ***** 2025-09-02 01:06:19.407514 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:06:19.407525 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:06:19.407535 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:06:19.407546 | orchestrator | 2025-09-02 01:06:19.407557 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-09-02 01:06:19.407568 | orchestrator | Tuesday 02 September 2025 01:04:42 +0000 (0:00:04.553) 0:01:24.606 ***** 2025-09-02 01:06:19.407579 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:06:19.407589 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:06:19.407600 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:06:19.407611 | orchestrator | 2025-09-02 01:06:19.407622 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-09-02 01:06:19.407633 | orchestrator | Tuesday 02 September 2025 01:04:43 +0000 (0:00:00.304) 0:01:24.911 ***** 2025-09-02 01:06:19.407643 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-02 01:06:19.407654 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:06:19.407665 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-02 01:06:19.407676 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:06:19.407687 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-02 01:06:19.407698 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:06:19.407709 | orchestrator | 2025-09-02 01:06:19.407719 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-09-02 01:06:19.407730 | orchestrator | Tuesday 02 September 2025 01:04:46 +0000 (0:00:03.757) 0:01:28.668 ***** 2025-09-02 01:06:19.407747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-02 01:06:19.407775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-02 01:06:19.407788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-02 01:06:19.407800 | orchestrator |
2025-09-02 01:06:19.407811 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-02 01:06:19.407822 | orchestrator | Tuesday 02 September 2025 01:04:54 +0000 (0:00:07.402) 0:01:36.071 *****
2025-09-02 01:06:19.407833 | orchestrator | skipping: [testbed-node-0]
2025-09-02 01:06:19.407843 | orchestrator | skipping: [testbed-node-1]
2025-09-02 01:06:19.407854 | orchestrator | skipping: [testbed-node-2]
2025-09-02 01:06:19.407865 | orchestrator |
2025-09-02 01:06:19.407880 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2025-09-02 01:06:19.407891 | orchestrator | Tuesday 02 September 2025 01:04:54 +0000 (0:00:00.482) 0:01:36.553 *****
2025-09-02 01:06:19.407908 | orchestrator | changed: [testbed-node-0]
2025-09-02 01:06:19.407919 | orchestrator |
2025-09-02 01:06:19.407930 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-09-02 01:06:19.407941 | orchestrator | Tuesday 02 September 2025 01:04:56 +0000 (0:00:02.277) 0:01:38.831 *****
2025-09-02 01:06:19.407952 | orchestrator | changed: [testbed-node-0]
2025-09-02 01:06:19.407962 | orchestrator |
2025-09-02 01:06:19.407973 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-09-02 01:06:19.407984 | orchestrator | Tuesday 02 September 2025 01:04:59 +0000 (0:00:02.569) 0:01:41.401 *****
2025-09-02 01:06:19.407994 | orchestrator | changed: [testbed-node-0]
2025-09-02 01:06:19.408005 | orchestrator |
2025-09-02 01:06:19.408016 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-09-02 01:06:19.408026 | orchestrator | Tuesday 02 September 2025 01:05:01 +0000 (0:00:02.300) 0:01:43.701 *****
2025-09-02 01:06:19.408037 | orchestrator | changed: [testbed-node-0]
2025-09-02 01:06:19.408048 | orchestrator |
2025-09-02 01:06:19.408059 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-09-02 01:06:19.408069 | orchestrator | Tuesday 02 September 2025 01:05:35 +0000 (0:00:33.229) 0:02:16.931 *****
2025-09-02 01:06:19.408080 | orchestrator | changed: [testbed-node-0]
2025-09-02 01:06:19.408091 | orchestrator |
2025-09-02 01:06:19.408107 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-02 01:06:19.408119 | orchestrator | Tuesday 02 September 2025 01:05:37 +0000 (0:00:02.518) 0:02:19.450 *****
2025-09-02 01:06:19.408129 | orchestrator |
2025-09-02 01:06:19.408140 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-02 01:06:19.408151 | orchestrator | Tuesday 02 September 2025 01:05:37 +0000 (0:00:00.059) 0:02:19.509 *****
2025-09-02 01:06:19.408162 | orchestrator |
2025-09-02 01:06:19.408172 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-02 01:06:19.408183 | orchestrator | Tuesday 02 September 2025 01:05:37 +0000 (0:00:00.067) 0:02:19.577 *****
2025-09-02 01:06:19.408194 | orchestrator |
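The database preparation just logged (create the glance database and user, temporarily enable log_bin_trust_function_creators, run the one-shot bootstrap container that applies Glance's schema migrations, then disable the flag again) boils down to a few MySQL statements around the migration. A rough sketch of the equivalent using PyMySQL; the host is the internal VIP seen in the container environment above and the credentials are placeholders, since kolla-ansible templates all of this itself:

import pymysql  # assumes the PyMySQL package is available

# Placeholder connection details; kolla-ansible reads the real credentials
# from its generated passwords file.
conn = pymysql.connect(host="192.168.16.9", user="root", password="REPLACE_ME", autocommit=True)

with conn.cursor() as cur:
    # "Creating Glance database" / "Creating Glance database user and setting permissions"
    cur.execute("CREATE DATABASE IF NOT EXISTS glance")
    cur.execute("CREATE USER IF NOT EXISTS 'glance'@'%' IDENTIFIED BY 'REPLACE_ME'")
    cur.execute("GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%'")
    # "Enable log_bin_trust_function_creators function": needed while the schema
    # migration creates triggers/functions on a binlog-enabled Galera server.
    cur.execute("SET GLOBAL log_bin_trust_function_creators = 1")

# "Running Glance bootstrap container": a one-shot glance-api container that
# essentially runs Glance's database migrations (glance-manage db sync).

with conn.cursor() as cur:
    # "Disable log_bin_trust_function_creators function"
    cur.execute("SET GLOBAL log_bin_trust_function_creators = 0")

conn.close()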
2025-09-02 01:06:19.408205 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-09-02 01:06:19.408215 | orchestrator | Tuesday 02 September 2025 01:05:37 +0000 (0:00:00.076) 0:02:19.654 *****
2025-09-02 01:06:19.408226 | orchestrator | changed: [testbed-node-0]
2025-09-02 01:06:19.408237 | orchestrator | changed: [testbed-node-2]
2025-09-02 01:06:19.408247 | orchestrator | changed: [testbed-node-1]
2025-09-02 01:06:19.408258 | orchestrator |
2025-09-02 01:06:19.408269 | orchestrator | PLAY RECAP *********************************************************************
2025-09-02 01:06:19.408280 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-02 01:06:19.408292 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-02 01:06:19.408303 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-02 01:06:19.408314 | orchestrator |
2025-09-02 01:06:19.408325 | orchestrator |
2025-09-02 01:06:19.408336 | orchestrator | TASKS RECAP ********************************************************************
2025-09-02 01:06:19.408346 | orchestrator | Tuesday 02 September 2025 01:06:16 +0000 (0:00:38.532) 0:02:58.187 *****
2025-09-02 01:06:19.408357 | orchestrator | ===============================================================================
2025-09-02 01:06:19.408368 | orchestrator | glance : Restart glance-api container ---------------------------------- 38.53s
2025-09-02 01:06:19.408378 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 33.23s
2025-09-02 01:06:19.408389 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.69s
2025-09-02 01:06:19.408400 | orchestrator | glance : Check glance containers ---------------------------------------- 7.40s
2025-09-02 01:06:19.408556 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.66s
2025-09-02 01:06:19.408589 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.80s
2025-09-02 01:06:19.408601 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.77s
2025-09-02 01:06:19.408612 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.55s
2025-09-02 01:06:19.408623 | orchestrator | glance : Copying over config.json files for services -------------------- 4.48s
2025-09-02 01:06:19.408633 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.25s
2025-09-02 01:06:19.408644 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.13s
2025-09-02 01:06:19.408655 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.11s
2025-09-02 01:06:19.408665 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.07s
2025-09-02 01:06:19.408676 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.76s
2025-09-02 01:06:19.408687 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.75s
2025-09-02 01:06:19.408697 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.73s
2025-09-02 01:06:19.408708 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.69s
2025-09-02 01:06:19.408718 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.45s
2025-09-02 01:06:19.408729 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.41s
2025-09-02 01:06:19.408747 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.37s
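The service-ks-register tasks that show up throughout this recap are plain Keystone bookkeeping: register an image service named glance, add internal and public endpoints on port 9292, and give a glance service user the admin role in the service project. An illustrative equivalent with openstacksdk; the role itself drives Ansible's OpenStack modules, and the cloud name, region and password below are placeholders:

import openstack  # openstacksdk

conn = openstack.connect(cloud="testbed")  # assumes a matching clouds.yaml entry

# "Creating services": register the glance image service.
service = conn.identity.create_service(name="glance", type="image")

# "Creating endpoints": internal and public endpoints on port 9292, as in the log.
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:9292"),
    ("public", "https://api.testbed.osism.xyz:9292"),
]:
    conn.identity.create_endpoint(
        service_id=service.id, interface=interface, url=url, region_id="RegionOne"
    )

# "Creating projects" / "Creating users" / "Creating roles" / "Granting user roles":
# a glance service user with the admin role in the service project.
project = conn.identity.find_project("service") or conn.identity.create_project(name="service")
user = conn.identity.create_user(name="glance", password="REPLACE_ME", default_project_id=project.id)
admin = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, admin)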
2025-09-02 01:06:22 to 01:06:31 | orchestrator | INFO  | Tasks e3bdef6b-9d00-4c55-9ab4-6d0a11937e35, bdd8faff-c5fc-496b-96ee-b0cc88184b4d, b9cae409-74a3-4890-b0d9-0e9419e89a9a and 37625813-2b84-4432-876b-dbb888dc866c reported in state STARTED on each poll (checks at 01:06:22, 01:06:25, 01:06:28 and 01:06:31, each followed by "Wait 1 second(s) until the next check")
2025-09-02 01:06:34.651554 | orchestrator | 2025-09-02 01:06:34 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED
2025-09-02 01:06:34.653372 | orchestrator | 2025-09-02 01:06:34 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED
2025-09-02 01:06:34.655731 | orchestrator | 2025-09-02 01:06:34 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED
2025-09-02 01:06:34.657459 | orchestrator | 2025-09-02 01:06:34 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED
2025-09-02 01:06:34.657801 | orchestrator | 2025-09-02 01:06:34 | INFO  | Wait 1 second(s) until the next
2025-09-02 01:06:37.705204 | orchestrator | 2025-09-02 01:06:37 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED
2025-09-02 01:06:37.706615 | orchestrator | 2025-09-02 01:06:37 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED
2025-09-02 01:06:37.708310 | orchestrator | 2025-09-02 01:06:37 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED
2025-09-02 01:06:37.709817 | orchestrator | 2025-09-02 01:06:37 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state STARTED
2025-09-02 01:06:37.710057 | orchestrator | 2025-09-02 01:06:37 | INFO  | Wait 1 second(s) until the next check
2025-09-02 01:06:40.757628 | orchestrator | 2025-09-02 01:06:40 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED
2025-09-02 01:06:40.759305 | orchestrator | 2025-09-02 01:06:40 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED
2025-09-02 01:06:40.762669 | orchestrator | 2025-09-02 01:06:40 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED
2025-09-02 01:06:40.766533 | orchestrator | 2025-09-02 01:06:40 | INFO  | Task 37625813-2b84-4432-876b-dbb888dc866c is in state SUCCESS
2025-09-02 01:06:40.768784 | orchestrator |
2025-09-02 01:06:40.768817 | orchestrator |
2025-09-02 01:06:40.768829 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-02 01:06:40.768842 | orchestrator |
2025-09-02 01:06:40.768870 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-02 01:06:40.768883 | orchestrator | Tuesday 02 September 2025 01:04:11 +0000 (0:00:00.205) 0:00:00.205 *****
2025-09-02 01:06:40.768894 | orchestrator | ok: [testbed-node-0]
2025-09-02 01:06:40.768907 | orchestrator | ok: [testbed-node-1]
2025-09-02 01:06:40.768918 | orchestrator | ok: [testbed-node-2]
2025-09-02 01:06:40.769037 | orchestrator |
2025-09-02 01:06:40.769052 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-02 01:06:40.769063 | orchestrator | Tuesday 02 September 2025 01:04:11 +0000 (0:00:00.264) 0:00:00.470 *****
2025-09-02 01:06:40.769075 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-09-02 01:06:40.769087 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-09-02 01:06:40.769098 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-09-02 01:06:40.769110 | orchestrator |
2025-09-02 01:06:40.769139 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-09-02 01:06:40.769162 | orchestrator |
2025-09-02 01:06:40.769174 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-09-02 01:06:40.769206 | orchestrator | Tuesday 02 September 2025 01:04:12 +0000 (0:00:00.334) 0:00:00.805 *****
2025-09-02 01:06:40.769241 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-02 01:06:40.769254 | orchestrator |
2025-09-02 01:06:40.769265 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-09-02 01:06:40.769277 | orchestrator | Tuesday 02 September 2025 01:04:12 +0000 (0:00:00.382) 0:00:01.187 *****
2025-09-02 01:06:40.769291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image':
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-02 01:06:40.769307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-02 01:06:40.769319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-02 01:06:40.769331 | orchestrator | 2025-09-02 01:06:40.769343 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-09-02 01:06:40.769354 | orchestrator | Tuesday 02 September 2025 01:04:13 +0000 (0:00:00.744) 0:00:01.932 ***** 2025-09-02 01:06:40.769365 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-02 01:06:40.769377 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-02 01:06:40.769391 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-02 01:06:40.769445 | orchestrator | 2025-09-02 01:06:40.769457 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-02 01:06:40.769470 | orchestrator | Tuesday 02 September 2025 01:04:14 +0000 (0:00:01.644) 0:00:03.577 ***** 2025-09-02 01:06:40.769482 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:06:40.769496 | orchestrator | 2025-09-02 01:06:40.769509 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-02 01:06:40.769541 | orchestrator | Tuesday 02 September 2025 01:04:15 +0000 (0:00:00.684) 0:00:04.261 ***** 2025-09-02 01:06:40.769575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-02 01:06:40.769600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-02 01:06:40.769615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-02 01:06:40.769628 | orchestrator | 2025-09-02 01:06:40.769641 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-02 01:06:40.769655 | orchestrator | Tuesday 02 September 2025 01:04:17 +0000 (0:00:01.603) 0:00:05.865 ***** 2025-09-02 01:06:40.769668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-02 01:06:40.769682 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:06:40.769695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}})  2025-09-02 01:06:40.769708 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:06:40.769729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-02 01:06:40.769766 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:06:40.769779 | orchestrator | 2025-09-02 01:06:40.769790 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-02 01:06:40.769822 | orchestrator | Tuesday 02 September 2025 01:04:17 +0000 (0:00:00.351) 0:00:06.216 ***** 2025-09-02 01:06:40.769834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-02 01:06:40.769846 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:06:40.769857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-02 01:06:40.769869 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:06:40.769880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-02 01:06:40.769893 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:06:40.769904 | orchestrator | 2025-09-02 01:06:40.769915 | 
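Every loop item printed by the grafana configuration and certificate tasks above is the same service definition dict, re-serialized by Ansible on each result line. For readability, here is that item written out as a commented Python literal; the values are copied from the log items above, and the variable name is only illustrative:

```python
# The grafana service definition as it appears in the loop items above,
# re-indented for readability (values taken from the log, name illustrative).
grafana_service = {
    "container_name": "grafana",
    "group": "grafana",
    "enabled": True,
    "image": "registry.osism.tech/kolla/grafana:2024.2",
    "volumes": [
        "/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro",  # generated config
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    "dimensions": {},
    "haproxy": {
        # internal frontend on the API VIP
        "grafana_server": {
            "enabled": "yes",
            "mode": "http",
            "external": False,
            "port": "3000",
            "listen_port": "3000",
        },
        # external frontend published behind api.testbed.osism.xyz
        "grafana_server_external": {
            "enabled": True,
            "mode": "http",
            "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "3000",
            "listen_port": "3000",
        },
    },
}
```

The two haproxy entries describe an internal and an external frontend, both on port 3000; the skipped backend-TLS certificate and key tasks above are consistent with no backend TLS material being configured for this service in this run.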
orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-02 01:06:40.769926 | orchestrator | Tuesday 02 September 2025 01:04:19 +0000 (0:00:01.623) 0:00:07.840 ***** 2025-09-02 01:06:40.769938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-02 01:06:40.769950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-02 01:06:40.769983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-02 01:06:40.769996 | orchestrator | 2025-09-02 01:06:40.770007 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-02 01:06:40.770066 | orchestrator | Tuesday 02 September 2025 01:04:20 +0000 (0:00:01.670) 0:00:09.511 ***** 2025-09-02 01:06:40.770081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-02 01:06:40.770093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-02 01:06:40.770105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-02 01:06:40.770116 | orchestrator | 2025-09-02 01:06:40.770128 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-09-02 01:06:40.770139 | orchestrator | Tuesday 02 September 2025 01:04:22 +0000 (0:00:01.794) 0:00:11.306 ***** 2025-09-02 01:06:40.770150 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:06:40.770161 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:06:40.770172 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:06:40.770183 | orchestrator | 2025-09-02 01:06:40.770194 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-02 01:06:40.770205 | orchestrator | Tuesday 02 September 2025 01:04:23 +0000 (0:00:00.977) 0:00:12.283 ***** 2025-09-02 01:06:40.770216 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-02 01:06:40.770235 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-02 01:06:40.770245 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-02 01:06:40.770256 | orchestrator | 2025-09-02 01:06:40.770267 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-09-02 01:06:40.770278 | orchestrator | Tuesday 02 September 2025 01:04:24 +0000 (0:00:01.321) 0:00:13.604 ***** 2025-09-02 01:06:40.770289 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-02 01:06:40.770301 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-02 01:06:40.770312 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-02 01:06:40.770323 | orchestrator | 2025-09-02 01:06:40.770334 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-09-02 01:06:40.770345 | orchestrator | Tuesday 02 September 2025 01:04:26 +0000 (0:00:01.425) 0:00:15.030 ***** 2025-09-02 01:06:40.770363 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-02 01:06:40.770375 | orchestrator | 2025-09-02 01:06:40.770391 | 
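The "Copying over custom dashboards" task further below loops over the dashboard files found under /operations/grafana/dashboards; each item pairs a relative key such as ceph/ceph-cluster-advanced.json with stat-style metadata for the source file (path, mode, size, inode, timestamps, ownership flags). The sketch below shows how such key/metadata pairs could be assembled with os.walk and os.stat; it only illustrates the item shape seen in the log, it is not the role's actual implementation, and filetree_items is a hypothetical helper:

```python
import os
import stat

def filetree_items(root):
    """Yield (relative_key, metadata) pairs shaped like the loop items below."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            yield os.path.relpath(path, root), {
                "path": path,
                "mode": oct(stat.S_IMODE(st.st_mode)).replace("0o", "0"),  # e.g. '0644'
                "isreg": stat.S_ISREG(st.st_mode),
                "uid": st.st_uid,
                "gid": st.st_gid,
                "size": st.st_size,
                "inode": st.st_ino,
                "mtime": st.st_mtime,
            }

# Example usage (path taken from the log; run only where it exists):
# for key, meta in filetree_items("/operations/grafana/dashboards"):
#     print(key, meta["size"])
```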
orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-09-02 01:06:40.770437 | orchestrator | Tuesday 02 September 2025 01:04:27 +0000 (0:00:01.436) 0:00:16.466 ***** 2025-09-02 01:06:40.770449 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-09-02 01:06:40.770460 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-09-02 01:06:40.770471 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:06:40.770482 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:06:40.770493 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:06:40.770504 | orchestrator | 2025-09-02 01:06:40.770514 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-02 01:06:40.770525 | orchestrator | Tuesday 02 September 2025 01:04:28 +0000 (0:00:00.982) 0:00:17.448 ***** 2025-09-02 01:06:40.770536 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:06:40.770547 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:06:40.770558 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:06:40.770569 | orchestrator | 2025-09-02 01:06:40.770580 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-09-02 01:06:40.770591 | orchestrator | Tuesday 02 September 2025 01:04:29 +0000 (0:00:00.775) 0:00:18.224 ***** 2025-09-02 01:06:40.770603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1851761, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5594969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1851761, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5594969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1851761, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5594969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770648 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1851777, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5678146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1851777, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5678146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1851777, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5678146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1851764, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5604968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1851764, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5604968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1851764, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5604968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1851778, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5694969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1851778, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5694969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1851778, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5694969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1851769, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5634968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1851769, 'dev': 116, 'nlink': 1, 
'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5634968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1851769, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5634968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1851774, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5666323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1851774, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5666323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1851774, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5666323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1851760, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5579758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1851760, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5579758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1851760, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5579758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1851762, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5594969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1851762, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5594969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.770987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1851762, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5594969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': 
{'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1851765, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5614967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1851765, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5614967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1851765, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5614967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1851771, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5644968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1851771, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5644968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1851771, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5644968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1851776, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.567562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1851776, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.567562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1851776, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.567562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1851763, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5604968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1851763, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5604968, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1851763, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5604968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1851773, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5663233, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1851773, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5663233, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1851773, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5663233, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1851770, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5644968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 
01:06:40.771243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1851770, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5644968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1851770, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5644968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1851768, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5633044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1851768, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5633044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1851768, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5633044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1851767, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5627484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1851767, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5627484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1851767, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5627484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1851772, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5656314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1851772, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5656314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1851772, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5656314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1851766, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5621493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1851766, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5621493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1851766, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5621493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1851775, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5672603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1851775, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5672603, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1851775, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5672603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1851801, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5967305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1851801, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5967305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1851801, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5967305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1851786, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.578497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1851786, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.578497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1851786, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.578497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1851783, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5724444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1851783, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5724444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1851783, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5724444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771671 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1851790, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5812025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1851790, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5812025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1851790, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5812025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1851780, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.570422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1851780, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.570422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1851780, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.570422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1851794, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5884972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1851794, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5884972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1851794, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5884972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1851791, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5854971, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1851791, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5854971, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1851791, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5854971, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1851795, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5884972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1851795, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5884972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1851795, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5884972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1851799, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5954971, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1851799, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5954971, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1851799, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5954971, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1851793, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.586497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1851793, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.586497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1851793, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.586497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.771997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1851788, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.579497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1851788, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.579497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1851788, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.579497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1851785, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.575497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1851785, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 
'mtime': 1756771329.0, 'ctime': 1756773365.575497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1851785, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.575497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1851787, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.579497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1851787, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.579497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1851787, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.579497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1851784, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.574497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1851784, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.574497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1851784, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.574497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1851789, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.580497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1851789, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.580497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1851789, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.580497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1851798, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5944972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1851798, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5944972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1851798, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5944972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1851797, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5914972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1851797, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5914972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-09-02 01:06:40.772291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1851797, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5914972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1851781, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5704968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1851781, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5704968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1851781, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5704968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1851782, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5715196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772367 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1851782, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5715196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1851782, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.5715196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1851792, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.586497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1851792, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.586497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1851792, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.586497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772454 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1851796, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.589497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1851796, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.589497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1851796, 'dev': 116, 'nlink': 1, 'atime': 1756771329.0, 'mtime': 1756771329.0, 'ctime': 1756773365.589497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-02 01:06:40.772675 | orchestrator | 2025-09-02 01:06:40.772687 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-02 01:06:40.772699 | orchestrator | Tuesday 02 September 2025 01:05:09 +0000 (0:00:40.498) 0:00:58.723 ***** 2025-09-02 01:06:40.772710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-02 01:06:40.772722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-02 01:06:40.772734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-02 01:06:40.772753 | orchestrator |
2025-09-02 01:06:40.772764 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-09-02 01:06:40.772776 | orchestrator | Tuesday 02 September 2025 01:05:10 +0000 (0:00:01.022) 0:00:59.745 *****
2025-09-02 01:06:40.772787 | orchestrator | changed: [testbed-node-0]
2025-09-02 01:06:40.772798 | orchestrator |
2025-09-02 01:06:40.772810 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-09-02 01:06:40.772821 | orchestrator | Tuesday 02 September 2025 01:05:13 +0000 (0:00:02.239) 0:01:01.985 *****
2025-09-02 01:06:40.772832 | orchestrator | changed: [testbed-node-0]
2025-09-02 01:06:40.772843 | orchestrator |
2025-09-02 01:06:40.772854 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-02 01:06:40.772865 | orchestrator | Tuesday 02 September 2025 01:05:15 +0000 (0:00:02.309) 0:01:04.294 *****
2025-09-02 01:06:40.772875 | orchestrator |
2025-09-02 01:06:40.772887 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-02 01:06:40.772904 | orchestrator | Tuesday 02 September 2025 01:05:15 +0000 (0:00:00.063) 0:01:04.358 *****
2025-09-02 01:06:40.772915 | orchestrator |
2025-09-02 01:06:40.772930 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-02 01:06:40.772942 | orchestrator | Tuesday 02 September 2025 01:05:15 +0000 (0:00:00.063) 0:01:04.421 *****
2025-09-02 01:06:40.772953 | orchestrator |
2025-09-02 01:06:40.772964 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-09-02 01:06:40.772975 | orchestrator | Tuesday 02 September 2025 01:05:15 +0000 (0:00:00.245) 0:01:04.667 *****
2025-09-02 01:06:40.772986 | orchestrator | skipping: [testbed-node-1]
2025-09-02 01:06:40.772997 | orchestrator | skipping: [testbed-node-2]
2025-09-02 01:06:40.773008 | orchestrator | changed: [testbed-node-0]
2025-09-02 01:06:40.773019 | orchestrator |
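The handler that follows re-checks the freshly restarted Grafana container until it answers, giving up after a fixed number of attempts (12 retries in this run). As a rough illustration only, not the role's actual task, and with the URL and delay below being assumptions, the same bounded-retry pattern looks like this in Python:

```python
# Minimal sketch of a bounded readiness check, mirroring the
# "Waiting for grafana to start on first node" handler below.
# The URL and delay are placeholders, not values from this deployment.
import time
import urllib.error
import urllib.request

def wait_for_grafana(url="http://127.0.0.1:3000", retries=12, delay=5):
    """Return True once the endpoint answers, False if all retries fail."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=5):
                return True
        except (urllib.error.URLError, OSError):
            print(f"FAILED - RETRYING ({retries - attempt} retries left)")
            time.sleep(delay)
    return False
```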
2025-09-02 01:06:40.773030 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-09-02 01:06:40.773041 | orchestrator | Tuesday 02 September 2025 01:05:22 +0000 (0:00:06.935) 0:01:11.602 *****
2025-09-02 01:06:40.773052 | orchestrator | skipping: [testbed-node-1]
2025-09-02 01:06:40.773063 | orchestrator | skipping: [testbed-node-2]
2025-09-02 01:06:40.773074 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-09-02 01:06:40.773086 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-09-02 01:06:40.773097 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-09-02 01:06:40.773108 | orchestrator | ok: [testbed-node-0]
2025-09-02 01:06:40.773119 | orchestrator |
2025-09-02 01:06:40.773130 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-09-02 01:06:40.773141 | orchestrator | Tuesday 02 September 2025 01:06:01 +0000 (0:00:38.760) 0:01:50.363 *****
2025-09-02 01:06:40.773152 | orchestrator | skipping: [testbed-node-0]
2025-09-02 01:06:40.773163 | orchestrator | changed: [testbed-node-2]
2025-09-02 01:06:40.773174 | orchestrator | changed: [testbed-node-1]
2025-09-02 01:06:40.773185 | orchestrator |
2025-09-02 01:06:40.773196 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-09-02 01:06:40.773207 | orchestrator | Tuesday 02 September 2025 01:06:32 +0000 (0:00:30.604) 0:02:20.968 *****
2025-09-02 01:06:40.773218 | orchestrator | ok: [testbed-node-0]
2025-09-02 01:06:40.773237 | orchestrator |
2025-09-02 01:06:40.773248 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-09-02 01:06:40.773260 | orchestrator | Tuesday 02 September 2025 01:06:34 +0000 (0:00:02.134) 0:02:23.103 *****
2025-09-02 01:06:40.773273 | orchestrator | skipping: [testbed-node-0]
2025-09-02 01:06:40.773286 | orchestrator | skipping: [testbed-node-1]
2025-09-02 01:06:40.773299 | orchestrator | skipping: [testbed-node-2]
2025-09-02 01:06:40.773312 | orchestrator |
2025-09-02 01:06:40.773325 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-09-02 01:06:40.773339 | orchestrator | Tuesday 02 September 2025 01:06:34 +0000 (0:00:00.501) 0:02:23.604 *****
2025-09-02 01:06:40.773353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-09-02 01:06:40.773369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-09-02 01:06:40.773384 | orchestrator |
2025-09-02 01:06:40.773418 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-09-02 01:06:40.773431 | orchestrator | Tuesday 02 September 2025 01:06:37 +0000 (0:00:02.332) 0:02:25.937 *****
2025-09-02 01:06:40.773443 | orchestrator | skipping: [testbed-node-0]
2025-09-02 01:06:40.773456 | orchestrator |
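The "Enable grafana datasources" items above are plain dictionaries describing each datasource; only the enabled 'opensearch' entry was applied. As a hedged illustration (this is not the role's own mechanism, and the Grafana URL and credentials below are placeholders), the same payload could be registered through Grafana's datasource API:

```python
# Rough sketch: push the 'opensearch' datasource definition seen above
# via Grafana's HTTP API (POST /api/datasources).
import requests

datasource = {
    "name": "opensearch",
    "type": "grafana-opensearch-datasource",
    "access": "proxy",
    "url": "https://api-int.testbed.osism.xyz:9200",
    "jsonData": {
        "flavor": "OpenSearch",
        "database": "flog-*",
        "version": "2.11.1",
        "timeField": "@timestamp",
        "logLevelField": "log_level",
    },
}

resp = requests.post(
    "http://localhost:3000/api/datasources",  # placeholder Grafana URL
    json=datasource,
    auth=("admin", "secret"),                 # placeholder credentials
    timeout=10,
)
resp.raise_for_status()
```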
2025-09-02 01:06:40.773469 | orchestrator | PLAY RECAP *********************************************************************
2025-09-02 01:06:40.773483 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-02 01:06:40.773497 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-02 01:06:40.773510 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-02 01:06:40.773524 | orchestrator |
2025-09-02 01:06:40.773536 | orchestrator |
2025-09-02 01:06:40.773549 | orchestrator | TASKS RECAP ********************************************************************
2025-09-02 01:06:40.773562 | orchestrator | Tuesday 02 September 2025 01:06:37 +0000 (0:00:00.278) 0:02:26.215 *****
2025-09-02 01:06:40.773574 | orchestrator | ===============================================================================
2025-09-02 01:06:40.773587 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 40.50s
2025-09-02 01:06:40.773601 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.76s
2025-09-02 01:06:40.773615 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 30.60s
2025-09-02 01:06:40.773625 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.94s
2025-09-02 01:06:40.773636 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.33s
2025-09-02 01:06:40.773653 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.31s
2025-09-02 01:06:40.773669 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.24s
2025-09-02 01:06:40.773681 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.13s
2025-09-02 01:06:40.773692 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.79s
2025-09-02 01:06:40.773703 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.67s
2025-09-02 01:06:40.773714 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.64s
2025-09-02 01:06:40.773725 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.62s
2025-09-02 01:06:40.773743 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.60s
2025-09-02 01:06:40.773754 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.44s
2025-09-02 01:06:40.773765 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.43s
2025-09-02 01:06:40.773775 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.32s
2025-09-02 01:06:40.773786 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.02s
2025-09-02 01:06:40.773797 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.98s
2025-09-02 01:06:40.773808 | orchestrator | grafana : Copying over extra configuration file ------------------------- 0.98s
2025-09-02 01:06:40.773819 | orchestrator | grafana : Prune templated Grafana dashboards ---------------------------- 0.78s
2025-09-02 01:06:40.773830 | orchestrator | 2025-09-02 01:06:40 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state STARTED
2025-09-02 01:06:40.773841 | orchestrator | 2025-09-02 01:06:40 | INFO  | Wait 1 second(s) until the next check
2025-09-02 01:06:43.816217 | orchestrator | 2025-09-02 01:06:43 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED
2025-09-02 01:06:43.817243 | orchestrator | 2025-09-02 01:06:43 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED
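The interleaved INFO lines above and below come from the deployment driver polling several long-running tasks: each task is re-checked roughly once per second until it reports SUCCESS. A minimal sketch of that polling pattern, where `get_task_state` stands in for the real lookup and is purely hypothetical:

```python
# Illustrative poll loop behind the "Task ... is in state STARTED" messages.
import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Block until every task reports a terminal state such as SUCCESS."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("STARTED", "PENDING"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```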
2025-09-02 01:06:43.820565 | orchestrator | 2025-09-02 01:06:43 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED
2025-09-02 01:06:43.823455 | orchestrator | 2025-09-02 01:06:43 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state STARTED
2025-09-02 01:06:43.824222 | orchestrator | 2025-09-02 01:06:43 | INFO  | Wait 1 second(s) until the next check
2025-09-02 01:06:46.868857 | orchestrator | 2025-09-02 01:06:46 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED
2025-09-02 01:06:46.870680 | orchestrator | 2025-09-02 01:06:46 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED
2025-09-02 01:06:46.872807 | orchestrator | 2025-09-02 01:06:46 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED
2025-09-02 01:06:46.874440 | orchestrator | 2025-09-02 01:06:46 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state STARTED
2025-09-02 01:06:46.874606 | orchestrator | 2025-09-02 01:06:46 | INFO  | Wait 1 second(s) until the next check
2025-09-02 01:06:49.919523 | orchestrator | 2025-09-02 01:06:49 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED
2025-09-02 01:06:49.921359 | orchestrator | 2025-09-02 01:06:49 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED
2025-09-02 01:06:49.923121 | orchestrator | 2025-09-02 01:06:49 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED
2025-09-02 01:06:49.924623 | orchestrator | 2025-09-02 01:06:49 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state STARTED
2025-09-02 01:06:49.924646 | orchestrator | 2025-09-02 01:06:49 | INFO  | Wait 1 second(s) until the next check
2025-09-02 01:06:52.963784 | orchestrator | 2025-09-02 01:06:52 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED
2025-09-02 01:06:52.964446 | orchestrator | 2025-09-02 01:06:52 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED
2025-09-02 01:06:52.966188 | orchestrator | 2025-09-02 01:06:52 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED
2025-09-02 01:06:52.967248 | orchestrator | 2025-09-02 01:06:52 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state STARTED
2025-09-02 01:06:52.967512 | orchestrator | 2025-09-02 01:06:52 | INFO  | Wait 1 second(s) until the next check
2025-09-02 01:06:56.019576 | orchestrator | 2025-09-02 01:06:56 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED
2025-09-02 01:06:56.021834 | orchestrator | 2025-09-02 01:06:56 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED
2025-09-02 01:06:56.024764 | orchestrator | 2025-09-02 01:06:56 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED
2025-09-02 01:06:56.027504 | orchestrator | 2025-09-02 01:06:56 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state STARTED
2025-09-02 01:06:56.027576 | orchestrator | 2025-09-02 01:06:56 | INFO  | Wait 1 second(s) until the next check
2025-09-02 01:06:59.077209 | orchestrator | 2025-09-02 01:06:59 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED
2025-09-02 01:06:59.078479 | orchestrator | 2025-09-02 01:06:59 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED
2025-09-02 01:06:59.079578 | orchestrator | 2025-09-02 01:06:59 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED
2025-09-02 01:06:59.081291 | orchestrator | 2025-09-02 01:06:59 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state STARTED
01:06:59.081312 | orchestrator | 2025-09-02 01:06:59 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:02.113700 | orchestrator | 2025-09-02 01:07:02 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state STARTED 2025-09-02 01:07:02.114769 | orchestrator | 2025-09-02 01:07:02 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:02.116888 | orchestrator | 2025-09-02 01:07:02 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:02.118990 | orchestrator | 2025-09-02 01:07:02 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state STARTED 2025-09-02 01:07:02.119234 | orchestrator | 2025-09-02 01:07:02 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:05.162612 | orchestrator | 2025-09-02 01:07:05 | INFO  | Task e3bdef6b-9d00-4c55-9ab4-6d0a11937e35 is in state SUCCESS 2025-09-02 01:07:05.164556 | orchestrator | 2025-09-02 01:07:05.164598 | orchestrator | 2025-09-02 01:07:05.164611 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 01:07:05.164624 | orchestrator | 2025-09-02 01:07:05.164636 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 01:07:05.164648 | orchestrator | Tuesday 02 September 2025 01:03:40 +0000 (0:00:00.305) 0:00:00.305 ***** 2025-09-02 01:07:05.164659 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:07:05.164708 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:07:05.164720 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:07:05.164745 | orchestrator | ok: [testbed-node-3] 2025-09-02 01:07:05.164758 | orchestrator | ok: [testbed-node-4] 2025-09-02 01:07:05.164769 | orchestrator | ok: [testbed-node-5] 2025-09-02 01:07:05.164780 | orchestrator | 2025-09-02 01:07:05.164791 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 01:07:05.164880 | orchestrator | Tuesday 02 September 2025 01:03:41 +0000 (0:00:00.729) 0:00:01.035 ***** 2025-09-02 01:07:05.164893 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-09-02 01:07:05.164905 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-09-02 01:07:05.164941 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-09-02 01:07:05.164954 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-09-02 01:07:05.164966 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-09-02 01:07:05.164976 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-09-02 01:07:05.164987 | orchestrator | 2025-09-02 01:07:05.164999 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-09-02 01:07:05.165010 | orchestrator | 2025-09-02 01:07:05.165021 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-02 01:07:05.165055 | orchestrator | Tuesday 02 September 2025 01:03:41 +0000 (0:00:00.589) 0:00:01.624 ***** 2025-09-02 01:07:05.165066 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 01:07:05.165079 | orchestrator | 2025-09-02 01:07:05.165090 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-09-02 01:07:05.165103 | orchestrator | Tuesday 02 September 2025 01:03:43 +0000 (0:00:01.436) 
0:00:03.061 ***** 2025-09-02 01:07:05.165118 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-09-02 01:07:05.165131 | orchestrator | 2025-09-02 01:07:05.165145 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-09-02 01:07:05.165158 | orchestrator | Tuesday 02 September 2025 01:03:46 +0000 (0:00:03.565) 0:00:06.627 ***** 2025-09-02 01:07:05.165171 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-09-02 01:07:05.165186 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-09-02 01:07:05.165199 | orchestrator | 2025-09-02 01:07:05.165213 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-09-02 01:07:05.165226 | orchestrator | Tuesday 02 September 2025 01:03:53 +0000 (0:00:06.562) 0:00:13.189 ***** 2025-09-02 01:07:05.165265 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-02 01:07:05.165279 | orchestrator | 2025-09-02 01:07:05.165293 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-09-02 01:07:05.165305 | orchestrator | Tuesday 02 September 2025 01:03:57 +0000 (0:00:03.792) 0:00:16.982 ***** 2025-09-02 01:07:05.165318 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-02 01:07:05.165330 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-09-02 01:07:05.165343 | orchestrator | 2025-09-02 01:07:05.165356 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-09-02 01:07:05.165368 | orchestrator | Tuesday 02 September 2025 01:04:01 +0000 (0:00:04.089) 0:00:21.071 ***** 2025-09-02 01:07:05.165402 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-02 01:07:05.165416 | orchestrator | 2025-09-02 01:07:05.165444 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-02 01:07:05.165457 | orchestrator | Tuesday 02 September 2025 01:04:05 +0000 (0:00:03.755) 0:00:24.826 ***** 2025-09-02 01:07:05.165468 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-02 01:07:05.165479 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-02 01:07:05.165490 | orchestrator | 2025-09-02 01:07:05.165501 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-02 01:07:05.165511 | orchestrator | Tuesday 02 September 2025 01:04:13 +0000 (0:00:08.158) 0:00:32.985 ***** 2025-09-02 01:07:05.165525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-02 01:07:05.165559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-02 01:07:05.165582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-02 01:07:05.165595 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.165613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.165626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.165646 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.165665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.165677 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.165690 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.165707 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.165720 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.165746 | orchestrator | 2025-09-02 01:07:05.165763 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-02 01:07:05.165775 | orchestrator | Tuesday 02 September 2025 01:04:15 +0000 (0:00:02.695) 0:00:35.680 ***** 2025-09-02 01:07:05.165786 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:07:05.165798 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:07:05.165809 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:07:05.165820 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:07:05.165831 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:07:05.165842 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:07:05.165853 | orchestrator | 2025-09-02 01:07:05.165864 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-02 01:07:05.165876 | orchestrator | Tuesday 02 September 2025 01:04:16 +0000 (0:00:00.673) 0:00:36.353 ***** 2025-09-02 01:07:05.165887 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:07:05.165898 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:07:05.165909 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:07:05.165920 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 01:07:05.165931 | orchestrator | 2025-09-02 01:07:05.165943 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-02 01:07:05.165954 | orchestrator | Tuesday 02 September 2025 01:04:17 +0000 (0:00:00.890) 0:00:37.244 ***** 2025-09-02 01:07:05.165965 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-02 01:07:05.165976 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-02 01:07:05.165987 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-02 01:07:05.165998 | orchestrator | changed: [testbed-node-4] => 
(item=cinder-backup) 2025-09-02 01:07:05.166009 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-02 01:07:05.166078 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-02 01:07:05.166091 | orchestrator | 2025-09-02 01:07:05.166102 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-02 01:07:05.166113 | orchestrator | Tuesday 02 September 2025 01:04:19 +0000 (0:00:02.276) 0:00:39.520 ***** 2025-09-02 01:07:05.166125 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-02 01:07:05.166145 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-02 01:07:05.166165 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-02 01:07:05.166185 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-02 01:07:05.166198 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-02 01:07:05.166210 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-02 01:07:05.166226 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-02 01:07:05.166245 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-02 01:07:05.166264 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-02 01:07:05.166276 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-02 01:07:05.166290 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-02 01:07:05.166314 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-02 01:07:05.166332 | orchestrator | 2025-09-02 01:07:05.166343 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-02 01:07:05.166355 | orchestrator | Tuesday 02 September 2025 01:04:24 +0000 (0:00:04.604) 0:00:44.125 ***** 2025-09-02 01:07:05.166366 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-02 01:07:05.166399 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-02 01:07:05.166411 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-02 01:07:05.166422 | orchestrator | 2025-09-02 01:07:05.166433 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-09-02 01:07:05.166445 | orchestrator | Tuesday 02 September 2025 01:04:26 +0000 (0:00:02.093) 0:00:46.219 ***** 2025-09-02 01:07:05.166456 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-02 01:07:05.166467 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-02 01:07:05.166477 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-02 01:07:05.166488 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-02 01:07:05.166499 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-02 01:07:05.166516 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-02 01:07:05.166528 | orchestrator | 2025-09-02 01:07:05.166539 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-02 01:07:05.166550 | orchestrator | Tuesday 02 September 2025 01:04:30 +0000 (0:00:03.639) 0:00:49.858 ***** 2025-09-02 01:07:05.166561 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-02 01:07:05.166595 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-02 01:07:05.166608 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-02 01:07:05.166619 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-02 01:07:05.166630 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-02 01:07:05.166641 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-02 01:07:05.166652 | orchestrator | 2025-09-02 01:07:05.166663 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-09-02 01:07:05.166673 | orchestrator | Tuesday 02 September 2025 01:04:31 +0000 (0:00:01.199) 0:00:51.058 ***** 2025-09-02 01:07:05.166684 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:07:05.166695 | orchestrator | 2025-09-02 01:07:05.166706 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-02 01:07:05.166717 | orchestrator | Tuesday 02 September 2025 01:04:31 +0000 (0:00:00.130) 0:00:51.188 ***** 2025-09-02 01:07:05.166728 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:07:05.166739 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:07:05.166750 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:07:05.166762 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:07:05.166772 | orchestrator | skipping: [testbed-node-4] 
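The external_ceph.yml tasks above drop a ceph.conf and the cinder / cinder-backup client keyrings into the Kolla config directories on the storage nodes (testbed-node-3/4/5) before the cinder containers are created. A minimal post-deploy check, written as a sketch that assumes the conventional /etc/kolla/<service>/ layout implied by the task names; the exact destination paths and filenames below are assumptions, not values taken from this log:

    # Sketch only: confirm the Ceph artifacts the external-Ceph tasks are
    # expected to have placed on a storage node. Paths and filenames are
    # assumed from the task names above, not read from this log.
    from pathlib import Path

    EXPECTED = {
        "cinder-volume": ["ceph.conf", "ceph.client.cinder.keyring"],
        "cinder-backup": ["ceph.conf",
                          "ceph.client.cinder.keyring",
                          "ceph.client.cinder-backup.keyring"],
    }

    def missing_ceph_files(base="/etc/kolla"):
        """Return the expected-but-absent files for each cinder service."""
        missing = []
        for service, names in EXPECTED.items():
            for name in names:
                path = Path(base) / service / name
                if not path.is_file():
                    missing.append(str(path))
        return missing

    if __name__ == "__main__":
        gone = missing_ceph_files()
        print("all expected Ceph files present" if not gone
              else f"missing: {gone}")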
2025-09-02 01:07:05.166783 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:07:05.166795 | orchestrator | 2025-09-02 01:07:05.166806 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-02 01:07:05.166816 | orchestrator | Tuesday 02 September 2025 01:04:32 +0000 (0:00:00.947) 0:00:52.136 ***** 2025-09-02 01:07:05.166829 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 01:07:05.166841 | orchestrator | 2025-09-02 01:07:05.166852 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-02 01:07:05.166871 | orchestrator | Tuesday 02 September 2025 01:04:33 +0000 (0:00:01.456) 0:00:53.592 ***** 2025-09-02 01:07:05.166883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-02 01:07:05.166901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-02 01:07:05.166921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-02 01:07:05.166934 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.166946 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.166966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.166983 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.166995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.167032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.167044 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.167056 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.167075 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.167087 | orchestrator | 2025-09-02 01:07:05.167098 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-02 01:07:05.167110 | orchestrator | Tuesday 02 September 2025 01:04:37 +0000 (0:00:03.539) 0:00:57.131 ***** 2025-09-02 
01:07:05.167127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-02 01:07:05.167146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.167159 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:07:05.167171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-02 01:07:05.167189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.167201 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:07:05.167213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.167230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.167242 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:07:05.167254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-02 01:07:05.167273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.167285 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:07:05.167297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.167318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.167357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.167371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.167435 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:07:05.167448 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:07:05.167459 | orchestrator | 2025-09-02 01:07:05.167470 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-02 01:07:05.167481 | orchestrator | Tuesday 02 September 2025 01:04:39 +0000 (0:00:01.656) 0:00:58.788 ***** 2025-09-02 01:07:05.167502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-02 01:07:05.167523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.167535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-02 01:07:05.167551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.167563 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:07:05.167574 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:07:05.167585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-02 01:07:05.167607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.167625 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:07:05.167637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.167649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.167661 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:07:05.167672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.167692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.167704 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:07:05.167724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.167743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.167754 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:07:05.167765 | orchestrator | 2025-09-02 01:07:05.167776 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-02 01:07:05.167787 | orchestrator | Tuesday 02 September 2025 01:04:41 +0000 (0:00:02.544) 0:01:01.333 ***** 2025-09-02 01:07:05.167798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 
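The per-service items echoed in these loop results all follow the same service-definition shape: one key per service with container_name, group, enabled, image, volumes, a healthcheck command (healthcheck_curl against the API endpoint for cinder-api, healthcheck_port against port 5672 for the other services), and, for cinder-api only, an haproxy section. The per-node skips and changes are consistent with filtering on the group field: testbed-node-0..2 only handle cinder-api and cinder-scheduler, while testbed-node-3..5 handle cinder-volume and cinder-backup. A minimal Python sketch of that filtering, using a trimmed, hypothetical copy of two entries from the log (not kolla-ansible's actual implementation):

    # Illustrative sketch only, not kolla-ansible's code. The dict is a trimmed,
    # hypothetical copy of two of the service entries printed in the log above.
    cinder_services = {
        "cinder-api": {
            "container_name": "cinder_api",
            "group": "cinder-api",
            "enabled": True,
            "image": "registry.osism.tech/kolla/cinder-api:2024.2",
            "healthcheck": {"test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"]},
        },
        "cinder-volume": {
            "container_name": "cinder_volume",
            "group": "cinder-volume",
            "enabled": True,
            "image": "registry.osism.tech/kolla/cinder-volume:2024.2",
            "healthcheck": {"test": ["CMD-SHELL", "healthcheck_port cinder-volume 5672"]},
        },
    }

    def services_for(host_groups):
        # Keep only enabled services whose group matches one of the host's groups;
        # everything else would appear as "skipping" in the loop output above.
        return [name for name, svc in cinder_services.items()
                if svc["enabled"] and svc["group"] in host_groups]

    print(services_for({"cinder-api", "cinder-scheduler"}))  # control node -> ['cinder-api']
    print(services_for({"cinder-volume", "cinder-backup"}))  # volume node  -> ['cinder-volume']
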
2025-09-02 01:07:05.167815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-02 01:07:05.167827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-02 01:07:05.167846 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.167865 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.167877 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.167893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.167905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.167924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.167942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.167953 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.167964 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.167974 | orchestrator | 2025-09-02 01:07:05.167984 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-02 01:07:05.167994 | orchestrator | Tuesday 02 September 2025 01:04:44 +0000 (0:00:02.926) 0:01:04.259 ***** 2025-09-02 01:07:05.168004 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-02 01:07:05.168015 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-02 01:07:05.168024 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:07:05.168034 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:07:05.168044 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-02 01:07:05.168054 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:07:05.168064 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-02 01:07:05.168078 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-02 01:07:05.168088 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-02 01:07:05.168098 | orchestrator | 2025-09-02 01:07:05.168107 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-02 01:07:05.168117 | orchestrator | Tuesday 02 September 2025 01:04:46 +0000 (0:00:02.206) 0:01:06.465 ***** 2025-09-02 01:07:05.168128 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.168152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-02 01:07:05.168163 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.168173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-02 01:07:05.168188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-02 01:07:05.168212 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.168222 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.168233 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.168243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.168253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.168268 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.168286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.168296 | orchestrator | 2025-09-02 01:07:05.168306 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-02 01:07:05.168316 | orchestrator | Tuesday 02 September 2025 01:04:56 +0000 (0:00:10.022) 0:01:16.488 ***** 2025-09-02 01:07:05.168332 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:07:05.168342 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:07:05.168352 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:07:05.168362 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:07:05.168372 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:07:05.168396 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:07:05.168406 | orchestrator | 2025-09-02 01:07:05.168416 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-02 01:07:05.168426 | orchestrator | Tuesday 02 September 2025 01:04:58 +0000 (0:00:02.023) 0:01:18.511 ***** 2025-09-02 01:07:05.168436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-02 01:07:05.168447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.168457 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:07:05.168472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-02 01:07:05.168491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.168502 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:07:05.168519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.168530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.168540 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:07:05.168550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-02 01:07:05.168561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.168579 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:07:05.168596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.168606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.168617 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:07:05.168634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.168645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-02 01:07:05.168655 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:07:05.168665 | orchestrator | 2025-09-02 01:07:05.168675 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-02 01:07:05.168684 | orchestrator | Tuesday 02 September 2025 01:05:00 +0000 (0:00:01.489) 0:01:20.001 ***** 2025-09-02 01:07:05.168694 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:07:05.168704 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:07:05.168714 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:07:05.168730 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:07:05.168740 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:07:05.168749 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:07:05.168759 | orchestrator | 2025-09-02 01:07:05.168769 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-02 01:07:05.168779 | orchestrator | Tuesday 02 September 2025 01:05:00 +0000 (0:00:00.661) 0:01:20.662 ***** 2025-09-02 01:07:05.168794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-02 01:07:05.168805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-02 01:07:05.168823 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.168834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-02 01:07:05.168849 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.168864 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.168875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.168893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.168904 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.168914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.168930 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.168945 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-02 01:07:05.168956 | orchestrator | 2025-09-02 01:07:05.168966 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-02 01:07:05.168975 | orchestrator | Tuesday 02 September 2025 01:05:03 +0000 (0:00:02.852) 0:01:23.515 ***** 2025-09-02 01:07:05.168985 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:07:05.168995 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:07:05.169005 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:07:05.169015 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:07:05.169024 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:07:05.169034 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:07:05.169044 | orchestrator | 2025-09-02 01:07:05.169053 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-02 01:07:05.169063 | orchestrator | Tuesday 02 September 2025 01:05:04 +0000 (0:00:00.614) 0:01:24.130 ***** 2025-09-02 01:07:05.169073 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:07:05.169083 | orchestrator | 2025-09-02 01:07:05.169093 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-02 01:07:05.169103 | orchestrator | Tuesday 02 September 2025 01:05:07 +0000 (0:00:02.711) 0:01:26.841 ***** 2025-09-02 01:07:05.169112 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:07:05.169122 | orchestrator | 2025-09-02 01:07:05.169132 | orchestrator | TASK [cinder : Running Cinder bootstrap container] 
***************************** 2025-09-02 01:07:05.169141 | orchestrator | Tuesday 02 September 2025 01:05:09 +0000 (0:00:02.253) 0:01:29.095 ***** 2025-09-02 01:07:05.169151 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:07:05.169161 | orchestrator | 2025-09-02 01:07:05.169171 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-02 01:07:05.169181 | orchestrator | Tuesday 02 September 2025 01:05:30 +0000 (0:00:21.519) 0:01:50.615 ***** 2025-09-02 01:07:05.169190 | orchestrator | 2025-09-02 01:07:05.169206 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-02 01:07:05.169216 | orchestrator | Tuesday 02 September 2025 01:05:30 +0000 (0:00:00.081) 0:01:50.696 ***** 2025-09-02 01:07:05.169225 | orchestrator | 2025-09-02 01:07:05.169235 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-02 01:07:05.169245 | orchestrator | Tuesday 02 September 2025 01:05:30 +0000 (0:00:00.061) 0:01:50.758 ***** 2025-09-02 01:07:05.169255 | orchestrator | 2025-09-02 01:07:05.169270 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-02 01:07:05.169280 | orchestrator | Tuesday 02 September 2025 01:05:31 +0000 (0:00:00.066) 0:01:50.824 ***** 2025-09-02 01:07:05.169290 | orchestrator | 2025-09-02 01:07:05.169300 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-02 01:07:05.169310 | orchestrator | Tuesday 02 September 2025 01:05:31 +0000 (0:00:00.068) 0:01:50.893 ***** 2025-09-02 01:07:05.169320 | orchestrator | 2025-09-02 01:07:05.169329 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-02 01:07:05.169339 | orchestrator | Tuesday 02 September 2025 01:05:31 +0000 (0:00:00.068) 0:01:50.961 ***** 2025-09-02 01:07:05.169349 | orchestrator | 2025-09-02 01:07:05.169358 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-02 01:07:05.169368 | orchestrator | Tuesday 02 September 2025 01:05:31 +0000 (0:00:00.071) 0:01:51.032 ***** 2025-09-02 01:07:05.169535 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:07:05.169693 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:07:05.169712 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:07:05.169724 | orchestrator | 2025-09-02 01:07:05.169736 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-02 01:07:05.169750 | orchestrator | Tuesday 02 September 2025 01:06:02 +0000 (0:00:31.063) 0:02:22.096 ***** 2025-09-02 01:07:05.169761 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:07:05.169773 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:07:05.169784 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:07:05.169795 | orchestrator | 2025-09-02 01:07:05.169806 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-02 01:07:05.169817 | orchestrator | Tuesday 02 September 2025 01:06:16 +0000 (0:00:13.875) 0:02:35.971 ***** 2025-09-02 01:07:05.169828 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:07:05.169839 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:07:05.169850 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:07:05.169861 | orchestrator | 2025-09-02 01:07:05.169872 | orchestrator | RUNNING HANDLER [cinder : Restart 
cinder-backup container] ********************* 2025-09-02 01:07:05.169941 | orchestrator | Tuesday 02 September 2025 01:06:55 +0000 (0:00:39.295) 0:03:15.267 ***** 2025-09-02 01:07:05.169954 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:07:05.169965 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:07:05.169976 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:07:05.169987 | orchestrator | 2025-09-02 01:07:05.169998 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-02 01:07:05.170010 | orchestrator | Tuesday 02 September 2025 01:07:01 +0000 (0:00:05.660) 0:03:20.928 ***** 2025-09-02 01:07:05.170070 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:07:05.170082 | orchestrator | 2025-09-02 01:07:05.170094 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 01:07:05.170106 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-02 01:07:05.170120 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-02 01:07:05.170131 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-02 01:07:05.170165 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-02 01:07:05.170177 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-02 01:07:05.170187 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-02 01:07:05.170224 | orchestrator | 2025-09-02 01:07:05.170236 | orchestrator | 2025-09-02 01:07:05.170247 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 01:07:05.170258 | orchestrator | Tuesday 02 September 2025 01:07:01 +0000 (0:00:00.622) 0:03:21.551 ***** 2025-09-02 01:07:05.170269 | orchestrator | =============================================================================== 2025-09-02 01:07:05.170280 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 39.30s 2025-09-02 01:07:05.170292 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 31.06s 2025-09-02 01:07:05.170303 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.52s 2025-09-02 01:07:05.170313 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 13.88s 2025-09-02 01:07:05.170325 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.02s 2025-09-02 01:07:05.170336 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.16s 2025-09-02 01:07:05.170347 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.56s 2025-09-02 01:07:05.170358 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.66s 2025-09-02 01:07:05.170604 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.61s 2025-09-02 01:07:05.170624 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.09s 2025-09-02 01:07:05.170635 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.79s 2025-09-02 01:07:05.170646 | 
orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.76s 2025-09-02 01:07:05.170657 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.64s 2025-09-02 01:07:05.170668 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.57s 2025-09-02 01:07:05.170679 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.54s 2025-09-02 01:07:05.170690 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.93s 2025-09-02 01:07:05.170701 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.85s 2025-09-02 01:07:05.170711 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.71s 2025-09-02 01:07:05.170722 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.70s 2025-09-02 01:07:05.170733 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 2.54s 2025-09-02 01:07:05.170744 | orchestrator | 2025-09-02 01:07:05 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:05.170756 | orchestrator | 2025-09-02 01:07:05 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:05.172733 | orchestrator | 2025-09-02 01:07:05 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state STARTED 2025-09-02 01:07:05.173329 | orchestrator | 2025-09-02 01:07:05 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:08.220488 | orchestrator | 2025-09-02 01:07:08 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:08.222570 | orchestrator | 2025-09-02 01:07:08 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:08.224458 | orchestrator | 2025-09-02 01:07:08 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state STARTED 2025-09-02 01:07:08.225179 | orchestrator | 2025-09-02 01:07:08 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:11.266651 | orchestrator | 2025-09-02 01:07:11 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:11.268895 | orchestrator | 2025-09-02 01:07:11 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:11.271302 | orchestrator | 2025-09-02 01:07:11 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state STARTED 2025-09-02 01:07:11.271954 | orchestrator | 2025-09-02 01:07:11 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:14.312142 | orchestrator | 2025-09-02 01:07:14 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:14.313482 | orchestrator | 2025-09-02 01:07:14 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:14.315763 | orchestrator | 2025-09-02 01:07:14 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state STARTED 2025-09-02 01:07:14.315851 | orchestrator | 2025-09-02 01:07:14 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:17.354236 | orchestrator | 2025-09-02 01:07:17 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:17.355465 | orchestrator | 2025-09-02 01:07:17 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:17.357237 | orchestrator | 2025-09-02 01:07:17 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c 
is in state STARTED 2025-09-02 01:07:17.357297 | orchestrator | 2025-09-02 01:07:17 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:20.393655 | orchestrator | 2025-09-02 01:07:20 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:20.395444 | orchestrator | 2025-09-02 01:07:20 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:20.396614 | orchestrator | 2025-09-02 01:07:20 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state STARTED 2025-09-02 01:07:20.396991 | orchestrator | 2025-09-02 01:07:20 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:23.443806 | orchestrator | 2025-09-02 01:07:23 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:23.446809 | orchestrator | 2025-09-02 01:07:23 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:23.448734 | orchestrator | 2025-09-02 01:07:23 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state STARTED 2025-09-02 01:07:23.448851 | orchestrator | 2025-09-02 01:07:23 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:26.493985 | orchestrator | 2025-09-02 01:07:26 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:26.495415 | orchestrator | 2025-09-02 01:07:26 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:26.496705 | orchestrator | 2025-09-02 01:07:26 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state STARTED 2025-09-02 01:07:26.496730 | orchestrator | 2025-09-02 01:07:26 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:29.547470 | orchestrator | 2025-09-02 01:07:29 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:29.548604 | orchestrator | 2025-09-02 01:07:29 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:29.550000 | orchestrator | 2025-09-02 01:07:29 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state STARTED 2025-09-02 01:07:29.550073 | orchestrator | 2025-09-02 01:07:29 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:32.602436 | orchestrator | 2025-09-02 01:07:32 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:32.603883 | orchestrator | 2025-09-02 01:07:32 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:32.605965 | orchestrator | 2025-09-02 01:07:32 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state STARTED 2025-09-02 01:07:32.606068 | orchestrator | 2025-09-02 01:07:32 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:35.652468 | orchestrator | 2025-09-02 01:07:35 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:35.654003 | orchestrator | 2025-09-02 01:07:35 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:35.654404 | orchestrator | 2025-09-02 01:07:35 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state STARTED 2025-09-02 01:07:35.654427 | orchestrator | 2025-09-02 01:07:35 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:38.696196 | orchestrator | 2025-09-02 01:07:38 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:38.697509 | orchestrator | 2025-09-02 01:07:38 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:38.698782 | 
orchestrator | 2025-09-02 01:07:38 | INFO  | Task 247dea3a-b39f-4a13-8bc2-8a2c0ae6368c is in state SUCCESS 2025-09-02 01:07:38.698842 | orchestrator | 2025-09-02 01:07:38 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:41.744329 | orchestrator | 2025-09-02 01:07:41 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:41.746870 | orchestrator | 2025-09-02 01:07:41 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:41.746903 | orchestrator | 2025-09-02 01:07:41 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:44.783076 | orchestrator | 2025-09-02 01:07:44 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:44.784836 | orchestrator | 2025-09-02 01:07:44 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:44.784874 | orchestrator | 2025-09-02 01:07:44 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:47.824487 | orchestrator | 2025-09-02 01:07:47 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:47.825633 | orchestrator | 2025-09-02 01:07:47 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:47.825981 | orchestrator | 2025-09-02 01:07:47 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:50.872624 | orchestrator | 2025-09-02 01:07:50 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:50.877467 | orchestrator | 2025-09-02 01:07:50 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:50.878111 | orchestrator | 2025-09-02 01:07:50 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:53.921440 | orchestrator | 2025-09-02 01:07:53 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:53.922530 | orchestrator | 2025-09-02 01:07:53 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:53.922694 | orchestrator | 2025-09-02 01:07:53 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:07:56.970395 | orchestrator | 2025-09-02 01:07:56 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:07:56.972605 | orchestrator | 2025-09-02 01:07:56 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state STARTED 2025-09-02 01:07:56.972637 | orchestrator | 2025-09-02 01:07:56 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:00.019298 | orchestrator | 2025-09-02 01:08:00 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:00.019458 | orchestrator | 2025-09-02 01:08:00 | INFO  | Task b9cae409-74a3-4890-b0d9-0e9419e89a9a is in state SUCCESS 2025-09-02 01:08:00.019520 | orchestrator | 2025-09-02 01:08:00 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:03.079316 | orchestrator | 2025-09-02 01:08:03 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:03.079450 | orchestrator | 2025-09-02 01:08:03 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:06.125108 | orchestrator | 2025-09-02 01:08:06 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:06.125206 | orchestrator | 2025-09-02 01:08:06 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:09.161500 | orchestrator | 2025-09-02 01:08:09 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 
01:08:09.161607 | orchestrator | 2025-09-02 01:08:09 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:12.199323 | orchestrator | 2025-09-02 01:08:12 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:12.199474 | orchestrator | 2025-09-02 01:08:12 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:15.244805 | orchestrator | 2025-09-02 01:08:15 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:15.245020 | orchestrator | 2025-09-02 01:08:15 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:18.291703 | orchestrator | 2025-09-02 01:08:18 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:18.291801 | orchestrator | 2025-09-02 01:08:18 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:21.324677 | orchestrator | 2025-09-02 01:08:21 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:21.324778 | orchestrator | 2025-09-02 01:08:21 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:24.360539 | orchestrator | 2025-09-02 01:08:24 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:24.360636 | orchestrator | 2025-09-02 01:08:24 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:27.402471 | orchestrator | 2025-09-02 01:08:27 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:27.402580 | orchestrator | 2025-09-02 01:08:27 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:30.446529 | orchestrator | 2025-09-02 01:08:30 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:30.446768 | orchestrator | 2025-09-02 01:08:30 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:33.491909 | orchestrator | 2025-09-02 01:08:33 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:33.492012 | orchestrator | 2025-09-02 01:08:33 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:36.534103 | orchestrator | 2025-09-02 01:08:36 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:36.534201 | orchestrator | 2025-09-02 01:08:36 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:39.582234 | orchestrator | 2025-09-02 01:08:39 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:39.582333 | orchestrator | 2025-09-02 01:08:39 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:42.624297 | orchestrator | 2025-09-02 01:08:42 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:42.624464 | orchestrator | 2025-09-02 01:08:42 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:45.666478 | orchestrator | 2025-09-02 01:08:45 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:45.666738 | orchestrator | 2025-09-02 01:08:45 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:48.703586 | orchestrator | 2025-09-02 01:08:48 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:48.703717 | orchestrator | 2025-09-02 01:08:48 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:51.739379 | orchestrator | 2025-09-02 01:08:51 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:51.739476 | orchestrator | 2025-09-02 01:08:51 | INFO  | Wait 1 second(s) 
until the next check 2025-09-02 01:08:54.783779 | orchestrator | 2025-09-02 01:08:54 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:54.783879 | orchestrator | 2025-09-02 01:08:54 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:08:57.818662 | orchestrator | 2025-09-02 01:08:57 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:08:57.818764 | orchestrator | 2025-09-02 01:08:57 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:00.856782 | orchestrator | 2025-09-02 01:09:00 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:00.856900 | orchestrator | 2025-09-02 01:09:00 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:03.915290 | orchestrator | 2025-09-02 01:09:03 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:03.915469 | orchestrator | 2025-09-02 01:09:03 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:06.942272 | orchestrator | 2025-09-02 01:09:06 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:06.942433 | orchestrator | 2025-09-02 01:09:06 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:09.976898 | orchestrator | 2025-09-02 01:09:09 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:09.977915 | orchestrator | 2025-09-02 01:09:09 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:13.026714 | orchestrator | 2025-09-02 01:09:13 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:13.027267 | orchestrator | 2025-09-02 01:09:13 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:16.071884 | orchestrator | 2025-09-02 01:09:16 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:16.071985 | orchestrator | 2025-09-02 01:09:16 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:19.113971 | orchestrator | 2025-09-02 01:09:19 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:19.114130 | orchestrator | 2025-09-02 01:09:19 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:22.155727 | orchestrator | 2025-09-02 01:09:22 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:22.155827 | orchestrator | 2025-09-02 01:09:22 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:25.204970 | orchestrator | 2025-09-02 01:09:25 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:25.205071 | orchestrator | 2025-09-02 01:09:25 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:28.251036 | orchestrator | 2025-09-02 01:09:28 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:28.251218 | orchestrator | 2025-09-02 01:09:28 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:31.295218 | orchestrator | 2025-09-02 01:09:31 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:31.295398 | orchestrator | 2025-09-02 01:09:31 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:34.341216 | orchestrator | 2025-09-02 01:09:34 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:34.341319 | orchestrator | 2025-09-02 01:09:34 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:37.381625 | orchestrator | 2025-09-02 
01:09:37 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:37.381725 | orchestrator | 2025-09-02 01:09:37 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:40.425219 | orchestrator | 2025-09-02 01:09:40 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:40.425401 | orchestrator | 2025-09-02 01:09:40 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:43.463494 | orchestrator | 2025-09-02 01:09:43 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:43.463586 | orchestrator | 2025-09-02 01:09:43 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:46.503782 | orchestrator | 2025-09-02 01:09:46 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:46.503895 | orchestrator | 2025-09-02 01:09:46 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:49.536195 | orchestrator | 2025-09-02 01:09:49 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:49.536298 | orchestrator | 2025-09-02 01:09:49 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:52.583878 | orchestrator | 2025-09-02 01:09:52 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:52.583985 | orchestrator | 2025-09-02 01:09:52 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:55.626177 | orchestrator | 2025-09-02 01:09:55 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:55.626281 | orchestrator | 2025-09-02 01:09:55 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:09:58.668606 | orchestrator | 2025-09-02 01:09:58 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:09:58.668735 | orchestrator | 2025-09-02 01:09:58 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:01.716803 | orchestrator | 2025-09-02 01:10:01 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:01.716932 | orchestrator | 2025-09-02 01:10:01 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:04.755273 | orchestrator | 2025-09-02 01:10:04 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:04.755422 | orchestrator | 2025-09-02 01:10:04 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:07.803790 | orchestrator | 2025-09-02 01:10:07 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:07.803877 | orchestrator | 2025-09-02 01:10:07 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:10.852508 | orchestrator | 2025-09-02 01:10:10 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:10.852593 | orchestrator | 2025-09-02 01:10:10 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:13.897812 | orchestrator | 2025-09-02 01:10:13 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:13.897912 | orchestrator | 2025-09-02 01:10:13 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:16.943781 | orchestrator | 2025-09-02 01:10:16 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:16.943918 | orchestrator | 2025-09-02 01:10:16 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:19.995807 | orchestrator | 2025-09-02 01:10:19 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 
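The long runs of "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" messages in this part of the log come from a client that repeatedly polls the state of each background task until it reaches a terminal state. A minimal Python sketch of that polling pattern, with a hypothetical get_task_state() callable standing in for the real OSISM task API:

import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_task(task_id, get_task_state, poll_interval=1.0):
    # Poll until the task leaves STARTED, mirroring the log lines above.
    while True:
        state = get_task_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state in TERMINAL_STATES:
            return state
        print(f"Wait {int(poll_interval)} second(s) until the next check")
        time.sleep(poll_interval)

In this log the checks land roughly three seconds apart even though the message announces a one-second wait, so the state lookup itself appears to take a couple of seconds per iteration.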
2025-09-02 01:10:19.995922 | orchestrator | 2025-09-02 01:10:19 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:23.045158 | orchestrator | 2025-09-02 01:10:23 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:23.045265 | orchestrator | 2025-09-02 01:10:23 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:26.088771 | orchestrator | 2025-09-02 01:10:26 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:26.088876 | orchestrator | 2025-09-02 01:10:26 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:29.138223 | orchestrator | 2025-09-02 01:10:29 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:29.138335 | orchestrator | 2025-09-02 01:10:29 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:32.185642 | orchestrator | 2025-09-02 01:10:32 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:32.185755 | orchestrator | 2025-09-02 01:10:32 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:35.232547 | orchestrator | 2025-09-02 01:10:35 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:35.232641 | orchestrator | 2025-09-02 01:10:35 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:38.278114 | orchestrator | 2025-09-02 01:10:38 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:38.278222 | orchestrator | 2025-09-02 01:10:38 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:41.321505 | orchestrator | 2025-09-02 01:10:41 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:41.321609 | orchestrator | 2025-09-02 01:10:41 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:44.376208 | orchestrator | 2025-09-02 01:10:44 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:44.376319 | orchestrator | 2025-09-02 01:10:44 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:47.419659 | orchestrator | 2025-09-02 01:10:47 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:47.419757 | orchestrator | 2025-09-02 01:10:47 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:50.464939 | orchestrator | 2025-09-02 01:10:50 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:50.465045 | orchestrator | 2025-09-02 01:10:50 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:53.507087 | orchestrator | 2025-09-02 01:10:53 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:53.507191 | orchestrator | 2025-09-02 01:10:53 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:56.547880 | orchestrator | 2025-09-02 01:10:56 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:56.547986 | orchestrator | 2025-09-02 01:10:56 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:10:59.591132 | orchestrator | 2025-09-02 01:10:59 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:10:59.591255 | orchestrator | 2025-09-02 01:10:59 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:11:02.639631 | orchestrator | 2025-09-02 01:11:02 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:02.639824 | orchestrator | 2025-09-02 01:11:02 | INFO  | Wait 1 
second(s) until the next check 2025-09-02 01:11:05.683527 | orchestrator | 2025-09-02 01:11:05 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:05.683654 | orchestrator | 2025-09-02 01:11:05 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:11:08.730993 | orchestrator | 2025-09-02 01:11:08 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:08.731093 | orchestrator | 2025-09-02 01:11:08 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:11:11.776209 | orchestrator | 2025-09-02 01:11:11 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:11.776315 | orchestrator | 2025-09-02 01:11:11 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:11:14.818428 | orchestrator | 2025-09-02 01:11:14 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:14.818560 | orchestrator | 2025-09-02 01:11:14 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:11:17.858510 | orchestrator | 2025-09-02 01:11:17 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:17.858613 | orchestrator | 2025-09-02 01:11:17 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:11:20.904054 | orchestrator | 2025-09-02 01:11:20 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:20.904189 | orchestrator | 2025-09-02 01:11:20 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:11:23.946898 | orchestrator | 2025-09-02 01:11:23 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:23.947033 | orchestrator | 2025-09-02 01:11:23 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:11:26.987698 | orchestrator | 2025-09-02 01:11:26 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:26.987826 | orchestrator | 2025-09-02 01:11:26 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:11:30.029493 | orchestrator | 2025-09-02 01:11:30 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:30.029619 | orchestrator | 2025-09-02 01:11:30 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:11:33.075292 | orchestrator | 2025-09-02 01:11:33 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:33.075455 | orchestrator | 2025-09-02 01:11:33 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:11:36.113138 | orchestrator | 2025-09-02 01:11:36 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:36.113238 | orchestrator | 2025-09-02 01:11:36 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:11:39.157723 | orchestrator | 2025-09-02 01:11:39 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:39.157823 | orchestrator | 2025-09-02 01:11:39 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:11:42.196458 | orchestrator | 2025-09-02 01:11:42 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:42.196563 | orchestrator | 2025-09-02 01:11:42 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:11:45.240742 | orchestrator | 2025-09-02 01:11:45 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:45.240847 | orchestrator | 2025-09-02 01:11:45 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:11:48.281490 | orchestrator | 
2025-09-02 01:11:48 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:48.281620 | orchestrator | 2025-09-02 01:11:48 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:11:51.312278 | orchestrator | 2025-09-02 01:11:51 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:51.312363 | orchestrator | 2025-09-02 01:11:51 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:11:54.334403 | orchestrator | 2025-09-02 01:11:54 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:54.334518 | orchestrator | 2025-09-02 01:11:54 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:11:57.368196 | orchestrator | 2025-09-02 01:11:57 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:11:57.368293 | orchestrator | 2025-09-02 01:11:57 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:12:00.398388 | orchestrator | 2025-09-02 01:12:00 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:12:00.398504 | orchestrator | 2025-09-02 01:12:00 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:12:03.435544 | orchestrator | 2025-09-02 01:12:03 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:12:03.435632 | orchestrator | 2025-09-02 01:12:03 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:12:06.482341 | orchestrator | 2025-09-02 01:12:06 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:12:06.482485 | orchestrator | 2025-09-02 01:12:06 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:12:09.538346 | orchestrator | 2025-09-02 01:12:09 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:12:09.538485 | orchestrator | 2025-09-02 01:12:09 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:12:12.584400 | orchestrator | 2025-09-02 01:12:12 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:12:12.584512 | orchestrator | 2025-09-02 01:12:12 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:12:15.644781 | orchestrator | 2025-09-02 01:12:15 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:12:15.644880 | orchestrator | 2025-09-02 01:12:15 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:12:18.690486 | orchestrator | 2025-09-02 01:12:18 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:12:18.690620 | orchestrator | 2025-09-02 01:12:18 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:12:21.740871 | orchestrator | 2025-09-02 01:12:21 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:12:21.741007 | orchestrator | 2025-09-02 01:12:21 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:12:24.787070 | orchestrator | 2025-09-02 01:12:24 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:12:24.787229 | orchestrator | 2025-09-02 01:12:24 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:12:27.844984 | orchestrator | 2025-09-02 01:12:27 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state STARTED 2025-09-02 01:12:27.845121 | orchestrator | 2025-09-02 01:12:27 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:12:30.892392 | orchestrator | 2025-09-02 01:12:30 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in 
state STARTED 2025-09-02 01:12:30.892571 | orchestrator | 2025-09-02 01:12:30 | INFO  | Wait 1 second(s) until the next check 2025-09-02 01:12:33.940285 | orchestrator | 2025-09-02 01:12:33 | INFO  | Task bdd8faff-c5fc-496b-96ee-b0cc88184b4d is in state SUCCESS 2025-09-02 01:12:33.941482 | orchestrator | 2025-09-02 01:12:33.941510 | orchestrator | 2025-09-02 01:12:33.941521 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 01:12:33.941532 | orchestrator | 2025-09-02 01:12:33.941541 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 01:12:33.941550 | orchestrator | Tuesday 02 September 2025 01:06:41 +0000 (0:00:00.271) 0:00:00.271 ***** 2025-09-02 01:12:33.941559 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:12:33.941569 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:12:33.941578 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:12:33.941586 | orchestrator | 2025-09-02 01:12:33.941595 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 01:12:33.941604 | orchestrator | Tuesday 02 September 2025 01:06:42 +0000 (0:00:00.322) 0:00:00.593 ***** 2025-09-02 01:12:33.941613 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-02 01:12:33.941622 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-02 01:12:33.941631 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-02 01:12:33.941691 | orchestrator | 2025-09-02 01:12:33.941750 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-02 01:12:33.941760 | orchestrator | 2025-09-02 01:12:33.941769 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-02 01:12:33.941778 | orchestrator | Tuesday 02 September 2025 01:06:42 +0000 (0:00:00.438) 0:00:01.032 ***** 2025-09-02 01:12:33.941786 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:12:33.941796 | orchestrator | 2025-09-02 01:12:33.941805 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-02 01:12:33.941814 | orchestrator | Tuesday 02 September 2025 01:06:43 +0000 (0:00:00.530) 0:00:01.563 ***** 2025-09-02 01:12:33.941823 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-02 01:12:33.941832 | orchestrator | 2025-09-02 01:12:33.941841 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-02 01:12:33.941849 | orchestrator | Tuesday 02 September 2025 01:06:46 +0000 (0:00:03.594) 0:00:05.158 ***** 2025-09-02 01:12:33.941858 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-02 01:12:33.941868 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-02 01:12:33.941877 | orchestrator | 2025-09-02 01:12:33.941885 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-02 01:12:33.941894 | orchestrator | Tuesday 02 September 2025 01:06:53 +0000 (0:00:06.325) 0:00:11.483 ***** 2025-09-02 01:12:33.941902 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-02 01:12:33.941911 | orchestrator | 2025-09-02 01:12:33.941920 | orchestrator | TASK 
[service-ks-register : octavia | Creating users] ************************** 2025-09-02 01:12:33.941928 | orchestrator | Tuesday 02 September 2025 01:06:56 +0000 (0:00:03.471) 0:00:14.955 ***** 2025-09-02 01:12:33.941937 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-02 01:12:33.941946 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-02 01:12:33.941955 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-02 01:12:33.941964 | orchestrator | 2025-09-02 01:12:33.941972 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-02 01:12:33.941981 | orchestrator | Tuesday 02 September 2025 01:07:04 +0000 (0:00:08.071) 0:00:23.027 ***** 2025-09-02 01:12:33.941990 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-02 01:12:33.942078 | orchestrator | 2025-09-02 01:12:33.942093 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-02 01:12:33.942102 | orchestrator | Tuesday 02 September 2025 01:07:08 +0000 (0:00:03.493) 0:00:26.521 ***** 2025-09-02 01:12:33.942111 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-02 01:12:33.942131 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-02 01:12:33.942140 | orchestrator | 2025-09-02 01:12:33.942149 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-02 01:12:33.942157 | orchestrator | Tuesday 02 September 2025 01:07:15 +0000 (0:00:07.526) 0:00:34.047 ***** 2025-09-02 01:12:33.942166 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-02 01:12:33.942174 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-02 01:12:33.942184 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-02 01:12:33.942193 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-02 01:12:33.942201 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-02 01:12:33.942210 | orchestrator | 2025-09-02 01:12:33.942219 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-02 01:12:33.942240 | orchestrator | Tuesday 02 September 2025 01:07:31 +0000 (0:00:15.975) 0:00:50.022 ***** 2025-09-02 01:12:33.942249 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:12:33.942258 | orchestrator | 2025-09-02 01:12:33.942267 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-09-02 01:12:33.942275 | orchestrator | Tuesday 02 September 2025 01:07:32 +0000 (0:00:00.580) 0:00:50.603 ***** 2025-09-02 01:12:33.942297 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.", "response": "
503 Service Unavailable
\nNo server is available to handle this request.\n\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request."} 2025-09-02 01:12:33.942310 | orchestrator | 2025-09-02 01:12:33.942319 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 01:12:33.942329 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-09-02 01:12:33.942340 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:12:33.942349 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:12:33.942358 | orchestrator | 2025-09-02 01:12:33.942366 | orchestrator | 2025-09-02 01:12:33.942375 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 01:12:33.942384 | orchestrator | Tuesday 02 September 2025 01:07:35 +0000 (0:00:03.386) 0:00:53.989 ***** 2025-09-02 01:12:33.942393 | orchestrator | =============================================================================== 2025-09-02 01:12:33.942401 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.98s 2025-09-02 01:12:33.942410 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.07s 2025-09-02 01:12:33.942418 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.53s 2025-09-02 01:12:33.942427 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.33s 2025-09-02 01:12:33.942436 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.59s 2025-09-02 01:12:33.942461 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.49s 2025-09-02 01:12:33.942470 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.47s 2025-09-02 01:12:33.942478 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.39s 2025-09-02 01:12:33.942487 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.58s 2025-09-02 01:12:33.942501 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.53s 2025-09-02 01:12:33.942510 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2025-09-02 01:12:33.942519 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-09-02 01:12:33.942527 | orchestrator | 2025-09-02 01:12:33.942536 | orchestrator | 2025-09-02 01:12:33.942545 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 01:12:33.942553 | orchestrator | 2025-09-02 01:12:33.942562 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 01:12:33.942570 | orchestrator | Tuesday 02 September 2025 01:06:21 +0000 (0:00:00.279) 0:00:00.279 ***** 2025-09-02 01:12:33.942580 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:12:33.942589 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:12:33.942598 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:12:33.942606 | orchestrator | 2025-09-02 01:12:33.942615 | orchestrator | TASK [Group hosts based on 
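The failure above is an os_nova_flavor call to https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora answered with HTTP 503 and "No server is available to handle this request", the standard HAProxy response when no backend is up, so the request reached the load balancer before any nova-api backend was serving. A hedged openstacksdk sketch of an equivalent flavor creation with a simple retry on transient HTTP errors; the cloud name, flavor sizing and retry cadence are placeholders, not the deployment's actual values:

import time

import openstack
from openstack.exceptions import HttpException

def create_amphora_flavor(conn, retries=5, delay=30):
    for attempt in range(1, retries + 1):
        try:
            # Placeholder sizing; the real amphora flavor is defined by the deployment.
            return conn.compute.create_flavor(
                name="amphora", ram=1024, vcpus=1, disk=5, is_public=False)
        except HttpException as exc:
            # For example a 503 while nova-api is still starting behind the load balancer.
            if attempt == retries:
                raise
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay} second(s)")
            time.sleep(delay)

conn = openstack.connect(cloud="testbed")  # cloud name is an assumption
create_amphora_flavor(conn)

The role does not retry here, so the play aborts and the recap above records failed=1 for testbed-node-0.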
enabled services] *********************************** 2025-09-02 01:12:33.942623 | orchestrator | Tuesday 02 September 2025 01:06:21 +0000 (0:00:00.519) 0:00:00.799 ***** 2025-09-02 01:12:33.942632 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-02 01:12:33.942641 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-09-02 01:12:33.942649 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-09-02 01:12:33.942658 | orchestrator | 2025-09-02 01:12:33.942667 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-09-02 01:12:33.942675 | orchestrator | 2025-09-02 01:12:33.942684 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-09-02 01:12:33.942693 | orchestrator | Tuesday 02 September 2025 01:06:22 +0000 (0:00:01.067) 0:00:01.866 ***** 2025-09-02 01:12:33.942702 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:12:33.942710 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:12:33.942719 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:12:33.942728 | orchestrator | 2025-09-02 01:12:33.942737 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 01:12:33.942746 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:12:33.942754 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:12:33.942764 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:12:33.942773 | orchestrator | 2025-09-02 01:12:33.942782 | orchestrator | 2025-09-02 01:12:33.942790 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 01:12:33.942799 | orchestrator | Tuesday 02 September 2025 01:07:57 +0000 (0:01:34.700) 0:01:36.567 ***** 2025-09-02 01:12:33.942808 | orchestrator | =============================================================================== 2025-09-02 01:12:33.942817 | orchestrator | Waiting for Nova public port to be UP ---------------------------------- 94.70s 2025-09-02 01:12:33.942825 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.07s 2025-09-02 01:12:33.942834 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.52s 2025-09-02 01:12:33.942843 | orchestrator | 2025-09-02 01:12:33.942852 | orchestrator | 2025-09-02 01:12:33.942860 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-02 01:12:33.942869 | orchestrator | 2025-09-02 01:12:33.942878 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-09-02 01:12:33.942891 | orchestrator | Tuesday 02 September 2025 01:03:49 +0000 (0:00:00.290) 0:00:00.290 ***** 2025-09-02 01:12:33.942900 | orchestrator | changed: [testbed-manager] 2025-09-02 01:12:33.942910 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:12:33.942918 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:12:33.942927 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:12:33.942941 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:12:33.942950 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:12:33.942959 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:12:33.942968 | orchestrator | 2025-09-02 01:12:33.942976 | 
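In the recap just above, "Waiting for Nova public port to be UP" accounts for 94.70s of the play's roughly 97 second runtime; the play effectively blocks until the Nova API endpoint accepts connections. A rough Python equivalent of such a wait, with host and port taken from the Nova endpoint URLs elsewhere in this log and the timeout and interval chosen as assumptions:

import socket
import time

def wait_for_port(host, port, timeout=300.0, interval=5.0):
    # Return once a TCP connection to host:port succeeds; raise after the timeout.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return
        except OSError:
            time.sleep(interval)
    raise TimeoutError(f"{host}:{port} did not come up within {timeout} seconds")

wait_for_port("api.testbed.osism.xyz", 8774)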
orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-02 01:12:33.942985 | orchestrator | Tuesday 02 September 2025 01:03:50 +0000 (0:00:00.978) 0:00:01.269 ***** 2025-09-02 01:12:33.942994 | orchestrator | changed: [testbed-manager] 2025-09-02 01:12:33.943003 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:12:33.943012 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:12:33.943020 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:12:33.943029 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:12:33.943038 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:12:33.943046 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:12:33.943055 | orchestrator | 2025-09-02 01:12:33.943064 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-02 01:12:33.943073 | orchestrator | Tuesday 02 September 2025 01:03:50 +0000 (0:00:00.801) 0:00:02.070 ***** 2025-09-02 01:12:33.943082 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-09-02 01:12:33.943091 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-09-02 01:12:33.943099 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-09-02 01:12:33.943108 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-09-02 01:12:33.943117 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-09-02 01:12:33.943125 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-09-02 01:12:33.943134 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-09-02 01:12:33.943143 | orchestrator | 2025-09-02 01:12:33.943187 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-09-02 01:12:33.943197 | orchestrator | 2025-09-02 01:12:33.943206 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-02 01:12:33.943215 | orchestrator | Tuesday 02 September 2025 01:03:51 +0000 (0:00:00.981) 0:00:03.051 ***** 2025-09-02 01:12:33.943224 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:12:33.943233 | orchestrator | 2025-09-02 01:12:33.943241 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-09-02 01:12:33.943250 | orchestrator | Tuesday 02 September 2025 01:03:52 +0000 (0:00:00.758) 0:00:03.809 ***** 2025-09-02 01:12:33.943259 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-09-02 01:12:33.943268 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-09-02 01:12:33.943277 | orchestrator | 2025-09-02 01:12:33.943285 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-09-02 01:12:33.943294 | orchestrator | Tuesday 02 September 2025 01:03:57 +0000 (0:00:04.605) 0:00:08.415 ***** 2025-09-02 01:12:33.943303 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-02 01:12:33.943311 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-02 01:12:33.943320 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:12:33.943329 | orchestrator | 2025-09-02 01:12:33.943338 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-02 01:12:33.943347 | orchestrator | Tuesday 02 September 2025 01:04:01 +0000 (0:00:04.366) 0:00:12.781 ***** 2025-09-02 01:12:33.943355 | orchestrator 
| changed: [testbed-node-0] 2025-09-02 01:12:33.943364 | orchestrator | 2025-09-02 01:12:33.943373 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-09-02 01:12:33.943381 | orchestrator | Tuesday 02 September 2025 01:04:02 +0000 (0:00:00.780) 0:00:13.562 ***** 2025-09-02 01:12:33.943390 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:12:33.943399 | orchestrator | 2025-09-02 01:12:33.943408 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-09-02 01:12:33.943416 | orchestrator | Tuesday 02 September 2025 01:04:03 +0000 (0:00:01.274) 0:00:14.836 ***** 2025-09-02 01:12:33.943430 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:12:33.943450 | orchestrator | 2025-09-02 01:12:33.943459 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-02 01:12:33.943468 | orchestrator | Tuesday 02 September 2025 01:04:06 +0000 (0:00:02.887) 0:00:17.724 ***** 2025-09-02 01:12:33.943477 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.943485 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.943494 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.943503 | orchestrator | 2025-09-02 01:12:33.943512 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-02 01:12:33.943520 | orchestrator | Tuesday 02 September 2025 01:04:07 +0000 (0:00:00.554) 0:00:18.279 ***** 2025-09-02 01:12:33.943529 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:12:33.943538 | orchestrator | 2025-09-02 01:12:33.943550 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-09-02 01:12:33.943559 | orchestrator | Tuesday 02 September 2025 01:04:38 +0000 (0:00:31.067) 0:00:49.346 ***** 2025-09-02 01:12:33.943568 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:12:33.943576 | orchestrator | 2025-09-02 01:12:33.943585 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-02 01:12:33.943594 | orchestrator | Tuesday 02 September 2025 01:04:52 +0000 (0:00:14.544) 0:01:03.891 ***** 2025-09-02 01:12:33.943602 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:12:33.943611 | orchestrator | 2025-09-02 01:12:33.943619 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-02 01:12:33.943628 | orchestrator | Tuesday 02 September 2025 01:05:05 +0000 (0:00:12.440) 0:01:16.331 ***** 2025-09-02 01:12:33.943637 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:12:33.943645 | orchestrator | 2025-09-02 01:12:33.943654 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-09-02 01:12:33.943662 | orchestrator | Tuesday 02 September 2025 01:05:06 +0000 (0:00:01.344) 0:01:17.676 ***** 2025-09-02 01:12:33.943671 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.943680 | orchestrator | 2025-09-02 01:12:33.943694 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-02 01:12:33.943703 | orchestrator | Tuesday 02 September 2025 01:05:07 +0000 (0:00:00.544) 0:01:18.220 ***** 2025-09-02 01:12:33.943712 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:12:33.943721 | orchestrator | 2025-09-02 01:12:33.943729 | orchestrator | TASK [nova : Running 
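The "Create cell0 mappings", "Get a list of existing cells" and later "Create cell" steps appear to wrap nova-manage cell_v2 commands executed in a one-shot bootstrap container. A sketch under that assumption; the container name below is illustrative and not taken from this log:

import subprocess

BOOTSTRAP_CONTAINER = "nova_api_bootstrap"  # assumed name for illustration only

def nova_manage(*args):
    # Run a nova-manage command inside the bootstrap container and return its output.
    cmd = ["docker", "exec", BOOTSTRAP_CONTAINER, "nova-manage", *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

nova_manage("cell_v2", "map_cell0")                        # "Create cell0 mappings"
print(nova_manage("cell_v2", "list_cells", "--verbose"))   # "Get a list of existing cells"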
Nova API bootstrap container] ***************************** 2025-09-02 01:12:33.943738 | orchestrator | Tuesday 02 September 2025 01:05:07 +0000 (0:00:00.563) 0:01:18.784 ***** 2025-09-02 01:12:33.943747 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:12:33.943755 | orchestrator | 2025-09-02 01:12:33.943764 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-02 01:12:33.943773 | orchestrator | Tuesday 02 September 2025 01:05:25 +0000 (0:00:17.829) 0:01:36.613 ***** 2025-09-02 01:12:33.943781 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.943790 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.943798 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.943807 | orchestrator | 2025-09-02 01:12:33.943816 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-09-02 01:12:33.943824 | orchestrator | 2025-09-02 01:12:33.943833 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-02 01:12:33.943842 | orchestrator | Tuesday 02 September 2025 01:05:25 +0000 (0:00:00.330) 0:01:36.944 ***** 2025-09-02 01:12:33.943850 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:12:33.943859 | orchestrator | 2025-09-02 01:12:33.943867 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-09-02 01:12:33.943876 | orchestrator | Tuesday 02 September 2025 01:05:26 +0000 (0:00:00.597) 0:01:37.542 ***** 2025-09-02 01:12:33.943884 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.943893 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.943907 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:12:33.943916 | orchestrator | 2025-09-02 01:12:33.943924 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-09-02 01:12:33.943933 | orchestrator | Tuesday 02 September 2025 01:05:28 +0000 (0:00:02.141) 0:01:39.683 ***** 2025-09-02 01:12:33.943942 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.943950 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.943959 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:12:33.943968 | orchestrator | 2025-09-02 01:12:33.943976 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-02 01:12:33.943985 | orchestrator | Tuesday 02 September 2025 01:05:30 +0000 (0:00:02.129) 0:01:41.813 ***** 2025-09-02 01:12:33.943994 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.944002 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.944011 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.944019 | orchestrator | 2025-09-02 01:12:33.944028 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-02 01:12:33.944037 | orchestrator | Tuesday 02 September 2025 01:05:31 +0000 (0:00:00.342) 0:01:42.155 ***** 2025-09-02 01:12:33.944045 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-02 01:12:33.944054 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.944063 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-02 01:12:33.944071 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.944080 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-02 01:12:33.944088 | orchestrator | ok: [testbed-node-0 -> 
{{ service_rabbitmq_delegate_host }}] 2025-09-02 01:12:33.944097 | orchestrator | 2025-09-02 01:12:33.944106 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-02 01:12:33.944114 | orchestrator | Tuesday 02 September 2025 01:05:41 +0000 (0:00:09.981) 0:01:52.137 ***** 2025-09-02 01:12:33.944123 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.944131 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.944140 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.944148 | orchestrator | 2025-09-02 01:12:33.944157 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-02 01:12:33.944166 | orchestrator | Tuesday 02 September 2025 01:05:41 +0000 (0:00:00.412) 0:01:52.550 ***** 2025-09-02 01:12:33.944174 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-02 01:12:33.944183 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.944191 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-02 01:12:33.944200 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.944209 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-02 01:12:33.944217 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.944226 | orchestrator | 2025-09-02 01:12:33.944234 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-02 01:12:33.944243 | orchestrator | Tuesday 02 September 2025 01:05:42 +0000 (0:00:00.795) 0:01:53.346 ***** 2025-09-02 01:12:33.944252 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.944260 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.944269 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:12:33.944277 | orchestrator | 2025-09-02 01:12:33.944290 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-09-02 01:12:33.944299 | orchestrator | Tuesday 02 September 2025 01:05:42 +0000 (0:00:00.532) 0:01:53.879 ***** 2025-09-02 01:12:33.944307 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.944316 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.944324 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:12:33.944333 | orchestrator | 2025-09-02 01:12:33.944341 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-09-02 01:12:33.944350 | orchestrator | Tuesday 02 September 2025 01:05:43 +0000 (0:00:01.071) 0:01:54.950 ***** 2025-09-02 01:12:33.944358 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.944372 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.944381 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:12:33.944389 | orchestrator | 2025-09-02 01:12:33.944398 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-09-02 01:12:33.944407 | orchestrator | Tuesday 02 September 2025 01:05:46 +0000 (0:00:02.570) 0:01:57.521 ***** 2025-09-02 01:12:33.944420 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.944429 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.944438 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:12:33.944460 | orchestrator | 2025-09-02 01:12:33.944468 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-02 01:12:33.944477 | orchestrator | Tuesday 02 September 2025 01:06:08 +0000 (0:00:21.611) 
0:02:19.132 ***** 2025-09-02 01:12:33.944486 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.944495 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.944503 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:12:33.944512 | orchestrator | 2025-09-02 01:12:33.944521 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-02 01:12:33.944529 | orchestrator | Tuesday 02 September 2025 01:06:20 +0000 (0:00:12.048) 0:02:31.180 ***** 2025-09-02 01:12:33.944538 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.944547 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.944556 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:12:33.944564 | orchestrator | 2025-09-02 01:12:33.944573 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-09-02 01:12:33.944582 | orchestrator | Tuesday 02 September 2025 01:06:21 +0000 (0:00:01.469) 0:02:32.650 ***** 2025-09-02 01:12:33.944590 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.944599 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.944608 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:12:33.944616 | orchestrator | 2025-09-02 01:12:33.944625 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-09-02 01:12:33.944634 | orchestrator | Tuesday 02 September 2025 01:06:33 +0000 (0:00:11.848) 0:02:44.498 ***** 2025-09-02 01:12:33.944643 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.944651 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.944660 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.944669 | orchestrator | 2025-09-02 01:12:33.944677 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-02 01:12:33.944686 | orchestrator | Tuesday 02 September 2025 01:06:34 +0000 (0:00:01.138) 0:02:45.637 ***** 2025-09-02 01:12:33.944695 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.944704 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.944712 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.944721 | orchestrator | 2025-09-02 01:12:33.944729 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-09-02 01:12:33.944738 | orchestrator | 2025-09-02 01:12:33.944747 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-02 01:12:33.944756 | orchestrator | Tuesday 02 September 2025 01:06:35 +0000 (0:00:00.511) 0:02:46.148 ***** 2025-09-02 01:12:33.944764 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:12:33.944773 | orchestrator | 2025-09-02 01:12:33.944782 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-09-02 01:12:33.944791 | orchestrator | Tuesday 02 September 2025 01:06:35 +0000 (0:00:00.577) 0:02:46.726 ***** 2025-09-02 01:12:33.944799 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-09-02 01:12:33.944809 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-09-02 01:12:33.944817 | orchestrator | 2025-09-02 01:12:33.944826 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-09-02 01:12:33.944835 | orchestrator | Tuesday 02 September 2025 01:06:39 
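The service-ks-register blocks (for octavia earlier and for nova here) register the service in Keystone, add its internal and public endpoints, and create the service user with its role assignment. A condensed openstacksdk sketch of that sequence, using the nova endpoint URLs shown in this log; the cloud name, region and password are placeholders:

import openstack

conn = openstack.connect(cloud="testbed")  # cloud name is an assumption

# Catalog entry and endpoints, as in "nova | Creating services" / "Creating endpoints".
service = conn.identity.create_service(name="nova", type="compute")
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:8774/v2.1"),
    ("public", "https://api.testbed.osism.xyz:8774/v2.1"),
]:
    conn.identity.create_endpoint(
        service_id=service.id, interface=interface, url=url, region_id="RegionOne")

# Service user and role grant, as in "Creating users" / "Granting user roles".
user = conn.identity.create_user(
    name="nova", password="placeholder", domain_id="default")
project = conn.identity.find_project("service")
admin = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, admin)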
+0000 (0:00:03.429) 0:02:50.155 ***** 2025-09-02 01:12:33.944844 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-09-02 01:12:33.944858 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-09-02 01:12:33.944867 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-09-02 01:12:33.944876 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-09-02 01:12:33.944885 | orchestrator | 2025-09-02 01:12:33.944893 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-09-02 01:12:33.944902 | orchestrator | Tuesday 02 September 2025 01:06:45 +0000 (0:00:06.698) 0:02:56.854 ***** 2025-09-02 01:12:33.944911 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-02 01:12:33.944920 | orchestrator | 2025-09-02 01:12:33.944928 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-09-02 01:12:33.944937 | orchestrator | Tuesday 02 September 2025 01:06:49 +0000 (0:00:03.285) 0:03:00.139 ***** 2025-09-02 01:12:33.944946 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-02 01:12:33.944955 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-09-02 01:12:33.944963 | orchestrator | 2025-09-02 01:12:33.944972 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-09-02 01:12:33.944985 | orchestrator | Tuesday 02 September 2025 01:06:52 +0000 (0:00:03.781) 0:03:03.921 ***** 2025-09-02 01:12:33.944994 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-02 01:12:33.945003 | orchestrator | 2025-09-02 01:12:33.945011 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-09-02 01:12:33.945020 | orchestrator | Tuesday 02 September 2025 01:06:56 +0000 (0:00:03.561) 0:03:07.483 ***** 2025-09-02 01:12:33.945029 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-09-02 01:12:33.945037 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-09-02 01:12:33.945046 | orchestrator | 2025-09-02 01:12:33.945055 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-02 01:12:33.945064 | orchestrator | Tuesday 02 September 2025 01:07:04 +0000 (0:00:07.717) 0:03:15.200 ***** 2025-09-02 01:12:33.945083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-02 01:12:33.945096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-02 01:12:33.945121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-02 01:12:33.945138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.945149 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.945158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.945167 | orchestrator | 2025-09-02 01:12:33.945176 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-09-02 01:12:33.945185 | orchestrator | Tuesday 02 September 2025 01:07:05 +0000 (0:00:01.414) 0:03:16.615 ***** 2025-09-02 01:12:33.945199 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.945208 | orchestrator | 2025-09-02 01:12:33.945217 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-09-02 01:12:33.945225 | orchestrator | Tuesday 02 September 2025 01:07:05 +0000 (0:00:00.139) 0:03:16.754 ***** 2025-09-02 01:12:33.945234 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.945243 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.945252 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.945260 | orchestrator | 2025-09-02 01:12:33.945269 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-09-02 01:12:33.945278 | orchestrator | Tuesday 02 September 2025 01:07:05 +0000 (0:00:00.293) 0:03:17.047 ***** 2025-09-02 01:12:33.945287 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-02 01:12:33.945295 | orchestrator | 2025-09-02 01:12:33.945304 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-09-02 01:12:33.945313 | orchestrator | Tuesday 02 September 2025 01:07:06 +0000 (0:00:00.890) 0:03:17.938 ***** 2025-09-02 01:12:33.945321 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.945330 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.945339 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.945348 | orchestrator | 2025-09-02 01:12:33.945356 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-02 01:12:33.945365 | orchestrator | Tuesday 02 September 2025 01:07:07 +0000 (0:00:00.302) 0:03:18.240 ***** 2025-09-02 01:12:33.945374 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:12:33.945383 | orchestrator | 2025-09-02 01:12:33.945391 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-02 01:12:33.945400 | orchestrator | Tuesday 02 September 
2025 01:07:07 +0000 (0:00:00.551) 0:03:18.792 ***** 2025-09-02 01:12:33.945414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-02 01:12:33.945431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-02 01:12:33.945468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-02 01:12:33.945479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.945489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.945502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.945511 | orchestrator | 2025-09-02 01:12:33.945520 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-02 01:12:33.945529 | orchestrator | Tuesday 02 September 2025 01:07:10 +0000 (0:00:02.640) 0:03:21.432 ***** 2025-09-02 01:12:33.945544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-02 01:12:33.945560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.945569 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.945578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-02 01:12:33.945592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.945601 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.945617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-02 01:12:33.945632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.945641 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.945650 | orchestrator | 2025-09-02 01:12:33.945659 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-02 01:12:33.945668 | orchestrator | Tuesday 02 September 2025 01:07:11 +0000 (0:00:00.825) 0:03:22.257 ***** 2025-09-02 01:12:33.945677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-02 01:12:33.945690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.945700 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.945715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 
'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-02 01:12:33.945730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.945739 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.945748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-02 01:12:33.945757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.945766 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.945775 | orchestrator | 2025-09-02 01:12:33.945784 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-02 01:12:33.945793 | orchestrator | Tuesday 02 September 2025 01:07:11 +0000 (0:00:00.808) 0:03:23.065 ***** 2025-09-02 01:12:33.945812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-02 01:12:33.945828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-02 01:12:33.945838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-02 01:12:33.945851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.945861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.945881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.945891 | orchestrator | 2025-09-02 01:12:33.945899 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-02 01:12:33.945908 | orchestrator | Tuesday 02 September 2025 01:07:14 +0000 (0:00:02.390) 0:03:25.456 ***** 2025-09-02 01:12:33.945917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-02 01:12:33.945927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-02 01:12:33.945947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-02 01:12:33.945965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.945974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.945984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.945993 | orchestrator | 2025-09-02 01:12:33.946002 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-02 01:12:33.946010 | orchestrator | Tuesday 02 September 2025 01:07:20 +0000 (0:00:05.788) 0:03:31.244 ***** 2025-09-02 01:12:33.946053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-02 01:12:33.946073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 
01:12:33.946083 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.946100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-02 01:12:33.946110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.946119 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.946129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-02 01:12:33.946142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.946157 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.946166 | orchestrator | 2025-09-02 01:12:33.946174 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-02 01:12:33.946183 | orchestrator | Tuesday 02 September 2025 01:07:20 +0000 (0:00:00.617) 0:03:31.862 ***** 2025-09-02 01:12:33.946192 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:12:33.946201 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:12:33.946210 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:12:33.946218 | orchestrator | 2025-09-02 01:12:33.946227 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-02 01:12:33.946236 | orchestrator | Tuesday 02 September 2025 01:07:22 +0000 (0:00:01.537) 0:03:33.399 ***** 2025-09-02 01:12:33.946245 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.946259 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.946268 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.946277 | orchestrator | 2025-09-02 01:12:33.946285 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-02 01:12:33.946294 | orchestrator | Tuesday 02 September 2025 01:07:22 +0000 (0:00:00.329) 0:03:33.729 ***** 2025-09-02 01:12:33.946304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-02 01:12:33.946314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-02 01:12:33.946333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-02 01:12:33.946348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:12:33.951616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.951636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.951649 | orchestrator | 2025-09-02 01:12:33.951662 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-02 01:12:33.951675 | orchestrator | Tuesday 02 September 2025 01:07:24 +0000 (0:00:02.070) 0:03:35.799 ***** 2025-09-02 01:12:33.951686 | orchestrator | 2025-09-02 01:12:33.951697 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-02 01:12:33.951708 | orchestrator | Tuesday 02 September 2025 01:07:24 +0000 (0:00:00.132) 0:03:35.931 ***** 2025-09-02 01:12:33.951719 | orchestrator | 2025-09-02 01:12:33.951730 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-02 01:12:33.951740 | orchestrator | Tuesday 02 September 2025 01:07:24 +0000 (0:00:00.146) 0:03:36.078 ***** 2025-09-02 01:12:33.951773 | orchestrator | 2025-09-02 01:12:33.951785 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-02 01:12:33.951795 | orchestrator | Tuesday 02 September 2025 01:07:25 +0000 (0:00:00.147) 0:03:36.225 ***** 2025-09-02 01:12:33.951806 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:12:33.951818 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:12:33.951829 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:12:33.951840 | orchestrator | 2025-09-02 01:12:33.951850 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-02 01:12:33.951862 | orchestrator | Tuesday 02 September 2025 01:07:49 +0000 (0:00:24.250) 0:04:00.476 ***** 2025-09-02 01:12:33.951872 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:12:33.951883 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:12:33.951894 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:12:33.951904 | orchestrator | 2025-09-02 01:12:33.951915 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-02 01:12:33.951926 | orchestrator | 2025-09-02 01:12:33.951937 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-02 01:12:33.951948 | orchestrator | Tuesday 02 September 2025 01:08:00 +0000 (0:00:10.791) 0:04:11.267 ***** 2025-09-02 01:12:33.951959 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:12:33.951972 | orchestrator | 2025-09-02 01:12:33.951983 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-02 01:12:33.951993 | orchestrator | Tuesday 02 September 2025 01:08:01 +0000 (0:00:01.150) 0:04:12.418 ***** 2025-09-02 01:12:33.952004 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.952015 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.952026 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.952050 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.952064 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.952077 | orchestrator | skipping: 
[testbed-node-2] 2025-09-02 01:12:33.952089 | orchestrator | 2025-09-02 01:12:33.952102 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-02 01:12:33.952115 | orchestrator | Tuesday 02 September 2025 01:08:01 +0000 (0:00:00.598) 0:04:13.016 ***** 2025-09-02 01:12:33.952128 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.952140 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.952153 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.952166 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 01:12:33.952180 | orchestrator | 2025-09-02 01:12:33.952193 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-02 01:12:33.952205 | orchestrator | Tuesday 02 September 2025 01:08:02 +0000 (0:00:01.016) 0:04:14.032 ***** 2025-09-02 01:12:33.952218 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-02 01:12:33.952231 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-02 01:12:33.952244 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-02 01:12:33.952256 | orchestrator | 2025-09-02 01:12:33.952285 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-02 01:12:33.952298 | orchestrator | Tuesday 02 September 2025 01:08:03 +0000 (0:00:00.663) 0:04:14.696 ***** 2025-09-02 01:12:33.952311 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-02 01:12:33.952323 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-02 01:12:33.952336 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-02 01:12:33.952349 | orchestrator | 2025-09-02 01:12:33.952361 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-02 01:12:33.952375 | orchestrator | Tuesday 02 September 2025 01:08:04 +0000 (0:00:01.226) 0:04:15.923 ***** 2025-09-02 01:12:33.952387 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-02 01:12:33.952408 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.952419 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-02 01:12:33.952430 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.952458 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-02 01:12:33.952470 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.952481 | orchestrator | 2025-09-02 01:12:33.952491 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-02 01:12:33.952502 | orchestrator | Tuesday 02 September 2025 01:08:05 +0000 (0:00:00.796) 0:04:16.720 ***** 2025-09-02 01:12:33.952513 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-02 01:12:33.952524 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-02 01:12:33.952535 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.952546 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-02 01:12:33.952556 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-02 01:12:33.952567 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.952578 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  
2025-09-02 01:12:33.952589 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-02 01:12:33.952599 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.952610 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-02 01:12:33.952621 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-02 01:12:33.952632 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-02 01:12:33.952643 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-02 01:12:33.952654 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-02 01:12:33.952665 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-02 01:12:33.952675 | orchestrator | 2025-09-02 01:12:33.952686 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-02 01:12:33.952697 | orchestrator | Tuesday 02 September 2025 01:08:07 +0000 (0:00:02.026) 0:04:18.746 ***** 2025-09-02 01:12:33.952708 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.952718 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.952729 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.952740 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:12:33.952751 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:12:33.952761 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:12:33.952772 | orchestrator | 2025-09-02 01:12:33.952783 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-02 01:12:33.952793 | orchestrator | Tuesday 02 September 2025 01:08:09 +0000 (0:00:01.486) 0:04:20.232 ***** 2025-09-02 01:12:33.952804 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.952815 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.952825 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.952836 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:12:33.952847 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:12:33.952857 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:12:33.952868 | orchestrator | 2025-09-02 01:12:33.952879 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-02 01:12:33.952889 | orchestrator | Tuesday 02 September 2025 01:08:10 +0000 (0:00:01.559) 0:04:21.792 ***** 2025-09-02 01:12:33.952908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-02 01:12:33.952947 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-02 01:12:33.952960 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-02 01:12:33.952972 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-02 01:12:33.952984 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-02 01:12:33.952995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953041 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953053 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953077 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953128 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953151 | orchestrator | 2025-09-02 01:12:33.953162 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-02 01:12:33.953173 | orchestrator | Tuesday 02 September 2025 01:08:13 +0000 (0:00:02.376) 0:04:24.168 ***** 2025-09-02 01:12:33.953185 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-02 01:12:33.953196 | orchestrator | 2025-09-02 01:12:33.953207 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-02 01:12:33.953218 | orchestrator | Tuesday 02 September 2025 01:08:14 +0000 (0:00:01.224) 0:04:25.392 ***** 2025-09-02 01:12:33.953230 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953241 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953264 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953305 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953329 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953351 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953393 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953416 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953459 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.953472 | orchestrator | 2025-09-02 01:12:33.953483 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-02 01:12:33.953494 | orchestrator | Tuesday 02 September 2025 01:08:17 +0000 (0:00:03.678) 0:04:29.071 ***** 2025-09-02 01:12:33.953512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-02 01:12:33.953525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-02 01:12:33.953536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.953548 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.953559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-02 01:12:33.953583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-02 01:12:33.953601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.953613 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.953624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-02 01:12:33.953636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-02 01:12:33.953648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.953667 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.953678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-02 01:12:33.953694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.953705 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.953723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-02 01:12:33.953735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.953746 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.953757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-02 01:12:33.953768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.953785 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.953797 | orchestrator | 2025-09-02 01:12:33.953808 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-02 01:12:33.953819 | orchestrator | Tuesday 02 September 2025 01:08:19 +0000 (0:00:01.525) 0:04:30.597 ***** 2025-09-02 01:12:33.953831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-02 01:12:33.953847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-02 01:12:33.953866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-02 01:12:33.953878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.953889 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.953900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-02 01:12:33.953918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.953930 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.953945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-02 01:12:33.953958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-02 01:12:33.953976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.953987 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.953999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-02 01:12:33.954067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-02 01:12:33.954082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.954093 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.954105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.954116 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.954132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-02 01:12:33.954183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.954195 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.954206 | orchestrator | 2025-09-02 01:12:33.954217 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-02 01:12:33.954228 | orchestrator | Tuesday 02 September 2025 01:08:21 +0000 (0:00:02.172) 0:04:32.769 ***** 2025-09-02 01:12:33.954239 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.954250 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.954261 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.954272 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-02 01:12:33.954290 | orchestrator | 2025-09-02 01:12:33.954301 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-09-02 01:12:33.954312 | orchestrator | Tuesday 02 September 2025 01:08:22 +0000 (0:00:01.035) 0:04:33.805 ***** 2025-09-02 01:12:33.954322 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-02 01:12:33.954333 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-02 01:12:33.954344 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-02 01:12:33.954355 | orchestrator | 2025-09-02 01:12:33.954365 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-09-02 01:12:33.954376 | orchestrator | Tuesday 02 September 2025 01:08:23 +0000 (0:00:00.908) 0:04:34.713 ***** 2025-09-02 01:12:33.954387 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-02 01:12:33.954398 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-02 01:12:33.954408 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-02 01:12:33.954419 | orchestrator | 2025-09-02 
01:12:33.954430 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-09-02 01:12:33.954484 | orchestrator | Tuesday 02 September 2025 01:08:24 +0000 (0:00:00.981) 0:04:35.694 ***** 2025-09-02 01:12:33.954498 | orchestrator | ok: [testbed-node-3] 2025-09-02 01:12:33.954510 | orchestrator | ok: [testbed-node-4] 2025-09-02 01:12:33.954520 | orchestrator | ok: [testbed-node-5] 2025-09-02 01:12:33.954531 | orchestrator | 2025-09-02 01:12:33.954542 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-09-02 01:12:33.954553 | orchestrator | Tuesday 02 September 2025 01:08:25 +0000 (0:00:00.493) 0:04:36.187 ***** 2025-09-02 01:12:33.954564 | orchestrator | ok: [testbed-node-3] 2025-09-02 01:12:33.954575 | orchestrator | ok: [testbed-node-4] 2025-09-02 01:12:33.954586 | orchestrator | ok: [testbed-node-5] 2025-09-02 01:12:33.954596 | orchestrator | 2025-09-02 01:12:33.954607 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-09-02 01:12:33.954618 | orchestrator | Tuesday 02 September 2025 01:08:25 +0000 (0:00:00.760) 0:04:36.948 ***** 2025-09-02 01:12:33.954629 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-02 01:12:33.954640 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-02 01:12:33.954651 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-02 01:12:33.954662 | orchestrator | 2025-09-02 01:12:33.954673 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-09-02 01:12:33.954684 | orchestrator | Tuesday 02 September 2025 01:08:27 +0000 (0:00:01.221) 0:04:38.169 ***** 2025-09-02 01:12:33.954694 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-02 01:12:33.954705 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-02 01:12:33.954716 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-02 01:12:33.954727 | orchestrator | 2025-09-02 01:12:33.954737 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-09-02 01:12:33.954748 | orchestrator | Tuesday 02 September 2025 01:08:28 +0000 (0:00:01.181) 0:04:39.350 ***** 2025-09-02 01:12:33.954759 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-02 01:12:33.954770 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-02 01:12:33.954781 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-02 01:12:33.954791 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-09-02 01:12:33.954802 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-09-02 01:12:33.954813 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-09-02 01:12:33.954824 | orchestrator | 2025-09-02 01:12:33.954834 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-09-02 01:12:33.954845 | orchestrator | Tuesday 02 September 2025 01:08:32 +0000 (0:00:03.793) 0:04:43.143 ***** 2025-09-02 01:12:33.954856 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.954867 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.954882 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.954900 | orchestrator | 2025-09-02 01:12:33.954911 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 
2025-09-02 01:12:33.954922 | orchestrator | Tuesday 02 September 2025 01:08:32 +0000 (0:00:00.531) 0:04:43.675 ***** 2025-09-02 01:12:33.954933 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.954944 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.954954 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.954965 | orchestrator | 2025-09-02 01:12:33.954976 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-09-02 01:12:33.954987 | orchestrator | Tuesday 02 September 2025 01:08:32 +0000 (0:00:00.337) 0:04:44.012 ***** 2025-09-02 01:12:33.954998 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:12:33.955008 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:12:33.955019 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:12:33.955030 | orchestrator | 2025-09-02 01:12:33.955040 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-09-02 01:12:33.955051 | orchestrator | Tuesday 02 September 2025 01:08:34 +0000 (0:00:01.236) 0:04:45.249 ***** 2025-09-02 01:12:33.955077 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-02 01:12:33.955090 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-02 01:12:33.955101 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-02 01:12:33.955112 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-02 01:12:33.955123 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-02 01:12:33.955134 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-02 01:12:33.955144 | orchestrator | 2025-09-02 01:12:33.955155 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-09-02 01:12:33.955166 | orchestrator | Tuesday 02 September 2025 01:08:37 +0000 (0:00:03.294) 0:04:48.543 ***** 2025-09-02 01:12:33.955177 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-02 01:12:33.955188 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-02 01:12:33.955199 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-02 01:12:33.955209 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-02 01:12:33.955220 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:12:33.955231 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-02 01:12:33.955241 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:12:33.955252 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-02 01:12:33.955263 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:12:33.955274 | orchestrator | 2025-09-02 01:12:33.955284 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-09-02 01:12:33.955295 | orchestrator | Tuesday 02 September 2025 01:08:41 +0000 (0:00:03.653) 0:04:52.197 ***** 2025-09-02 01:12:33.955306 | orchestrator | skipping: [testbed-node-3] 2025-09-02 
01:12:33.955317 | orchestrator | 2025-09-02 01:12:33.955327 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-09-02 01:12:33.955338 | orchestrator | Tuesday 02 September 2025 01:08:41 +0000 (0:00:00.144) 0:04:52.342 ***** 2025-09-02 01:12:33.955349 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.955360 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.955371 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.955381 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.955392 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.955403 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.955420 | orchestrator | 2025-09-02 01:12:33.955431 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-09-02 01:12:33.955457 | orchestrator | Tuesday 02 September 2025 01:08:41 +0000 (0:00:00.588) 0:04:52.931 ***** 2025-09-02 01:12:33.955469 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-02 01:12:33.955480 | orchestrator | 2025-09-02 01:12:33.955490 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-09-02 01:12:33.955501 | orchestrator | Tuesday 02 September 2025 01:08:42 +0000 (0:00:00.707) 0:04:53.638 ***** 2025-09-02 01:12:33.955512 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.955523 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.955533 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.955544 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.955555 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.955565 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.955576 | orchestrator | 2025-09-02 01:12:33.955587 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-02 01:12:33.955597 | orchestrator | Tuesday 02 September 2025 01:08:43 +0000 (0:00:00.776) 0:04:54.415 ***** 2025-09-02 01:12:33.955617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-02 01:12:33.955638 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-02 01:12:33.955650 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-02 01:12:33.955662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-02 01:12:33.955680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-02 01:12:33.955692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-02 01:12:33.955708 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-02 01:12:33.955725 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-02 01:12:33.955736 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-02 01:12:33.955748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.955765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.955776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.955788 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.955804 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.955823 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.955834 | orchestrator | 2025-09-02 01:12:33.955845 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-09-02 01:12:33.955856 | orchestrator | Tuesday 02 September 2025 01:08:47 +0000 (0:00:03.870) 0:04:58.285 ***** 2025-09-02 01:12:33.955868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-02 01:12:33.955886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-02 01:12:33.955898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-02 01:12:33.955914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-02 01:12:33.955933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-02 01:12:33.955945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-02 01:12:33.955962 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.955974 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.955990 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.956002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-02 01:12:33.956020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-02 01:12:33.956031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-02 01:12:33.956048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.956060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.956071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.956083 | orchestrator | 2025-09-02 01:12:33.956094 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-02 01:12:33.956105 | orchestrator | Tuesday 02 September 2025 01:08:53 +0000 (0:00:06.158) 0:05:04.444 ***** 2025-09-02 01:12:33.956116 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.956127 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.956137 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.956148 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.956163 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.956174 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.956184 | orchestrator | 2025-09-02 01:12:33.956195 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-02 01:12:33.956206 | orchestrator | Tuesday 02 September 2025 01:08:54 +0000 
(0:00:01.378) 0:05:05.823 ***** 2025-09-02 01:12:33.956217 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-02 01:12:33.956227 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-02 01:12:33.956238 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-02 01:12:33.956249 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-02 01:12:33.956260 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-02 01:12:33.956271 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-02 01:12:33.956282 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.956304 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-02 01:12:33.956315 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.956326 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-02 01:12:33.956337 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-02 01:12:33.956348 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.956359 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-02 01:12:33.956370 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-02 01:12:33.956381 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-02 01:12:33.956391 | orchestrator | 2025-09-02 01:12:33.956402 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-02 01:12:33.956413 | orchestrator | Tuesday 02 September 2025 01:08:58 +0000 (0:00:03.596) 0:05:09.419 ***** 2025-09-02 01:12:33.956424 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.956434 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.956458 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.956469 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.956480 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.956490 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.956501 | orchestrator | 2025-09-02 01:12:33.956512 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-02 01:12:33.956522 | orchestrator | Tuesday 02 September 2025 01:08:58 +0000 (0:00:00.573) 0:05:09.992 ***** 2025-09-02 01:12:33.956533 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-02 01:12:33.956544 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-02 01:12:33.956555 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-02 01:12:33.956566 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-02 01:12:33.956577 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 
'nova-compute'}) 2025-09-02 01:12:33.956587 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-02 01:12:33.956598 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-02 01:12:33.956608 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-02 01:12:33.956619 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-02 01:12:33.956630 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-02 01:12:33.956640 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.956651 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-02 01:12:33.956662 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.956672 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-02 01:12:33.956683 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.956694 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-02 01:12:33.956704 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-02 01:12:33.956721 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-02 01:12:33.956732 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-02 01:12:33.956747 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-02 01:12:33.956758 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-02 01:12:33.956769 | orchestrator | 2025-09-02 01:12:33.956780 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-02 01:12:33.956791 | orchestrator | Tuesday 02 September 2025 01:09:04 +0000 (0:00:05.284) 0:05:15.277 ***** 2025-09-02 01:12:33.956801 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-02 01:12:33.956812 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-02 01:12:33.956823 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-02 01:12:33.956833 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-02 01:12:33.956850 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-02 01:12:33.956861 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-02 01:12:33.956872 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-02 01:12:33.956883 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-02 01:12:33.956893 | 
orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-02 01:12:33.956904 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-02 01:12:33.956915 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-02 01:12:33.956926 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-02 01:12:33.956936 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-02 01:12:33.956947 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.956958 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-02 01:12:33.956968 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.956979 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-02 01:12:33.956990 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-02 01:12:33.957000 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.957011 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-02 01:12:33.957022 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-02 01:12:33.957032 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-02 01:12:33.957043 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-02 01:12:33.957054 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-02 01:12:33.957064 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-02 01:12:33.957075 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-02 01:12:33.957086 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-02 01:12:33.957096 | orchestrator | 2025-09-02 01:12:33.957107 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-02 01:12:33.957123 | orchestrator | Tuesday 02 September 2025 01:09:10 +0000 (0:00:06.825) 0:05:22.103 ***** 2025-09-02 01:12:33.957134 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.957145 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.957156 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.957167 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.957178 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.957188 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.957199 | orchestrator | 2025-09-02 01:12:33.957210 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-02 01:12:33.957220 | orchestrator | Tuesday 02 September 2025 01:09:11 +0000 (0:00:00.784) 0:05:22.887 ***** 2025-09-02 01:12:33.957231 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.957242 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.957253 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.957263 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.957274 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.957284 | 
orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.957295 | orchestrator | 2025-09-02 01:12:33.957306 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-02 01:12:33.957317 | orchestrator | Tuesday 02 September 2025 01:09:12 +0000 (0:00:00.665) 0:05:23.552 ***** 2025-09-02 01:12:33.957327 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.957338 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.957349 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:12:33.957359 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.957370 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:12:33.957380 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:12:33.957391 | orchestrator | 2025-09-02 01:12:33.957402 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-02 01:12:33.957413 | orchestrator | Tuesday 02 September 2025 01:09:14 +0000 (0:00:02.045) 0:05:25.598 ***** 2025-09-02 01:12:33.957432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-02 01:12:33.957465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-02 01:12:33.957478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-02 01:12:33.957496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.957508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-02 01:12:33.957519 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.957535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.957547 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.957565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-02 01:12:33.957577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-02 01:12:33.957594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.957606 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.957617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-02 01:12:33.957629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-02 01:12:33.957644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.957663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.957674 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.957685 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.957697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-02 01:12:33.957714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-02 01:12:33.957725 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.957736 | orchestrator | 2025-09-02 01:12:33.957747 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-09-02 01:12:33.957758 | orchestrator | Tuesday 02 September 2025 01:09:15 +0000 (0:00:01.417) 0:05:27.016 ***** 2025-09-02 01:12:33.957769 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-02 01:12:33.957780 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-02 01:12:33.957791 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.957802 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-02 01:12:33.957812 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-02 01:12:33.957823 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.957834 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-02 01:12:33.957845 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-02 01:12:33.957855 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.957866 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-02 01:12:33.957877 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-02 01:12:33.957887 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.957898 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-02 01:12:33.957909 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-02 01:12:33.957919 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.957930 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-02 01:12:33.957941 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-02 01:12:33.957952 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.957962 | orchestrator | 
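Editor's note: the item dictionaries echoed by the config.json, policy-file and vendordata tasks above are the nova-cell service definitions the role iterates over. Flattened into single log records they are hard to read; reconstructed here from the logged nova-conductor entry, each one is a YAML mapping of roughly the following shape (empty volume entries omitted; this is a readability aid, not the role's actual defaults file):

    nova-conductor:
      container_name: nova_conductor
      group: nova-conductor
      enabled: true
      image: registry.osism.tech/kolla/nova-conductor:2024.2
      volumes:
        - /etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - kolla_logs:/var/log/kolla/
      dimensions: {}
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_port nova-conductor 5672"]
        timeout: "30"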
2025-09-02 01:12:33.957973 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-09-02 01:12:33.957984 | orchestrator | Tuesday 02 September 2025 01:09:16 +0000 (0:00:00.853) 0:05:27.869 ***** 2025-09-02 01:12:33.957999 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-02 01:12:33.958055 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-02 01:12:33.958078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-02 01:12:33.958090 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-02 01:12:33.958102 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-02 01:12:33.958113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-02 01:12:33.958129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-02 01:12:33.958147 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-02 01:12:33.958165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.958177 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-02 01:12:33.958188 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.958200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.958211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.958227 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.958255 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-02 01:12:33.958267 | orchestrator | 2025-09-02 01:12:33.958278 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-02 01:12:33.958289 | orchestrator | Tuesday 02 September 2025 01:09:19 +0000 (0:00:03.101) 0:05:30.971 ***** 2025-09-02 01:12:33.958300 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.958311 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.958321 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.958332 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.958343 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.958353 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.958364 | orchestrator | 2025-09-02 01:12:33.958375 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-02 01:12:33.958385 | orchestrator | Tuesday 02 September 2025 01:09:20 +0000 (0:00:00.776) 0:05:31.748 ***** 2025-09-02 01:12:33.958396 | orchestrator | 2025-09-02 01:12:33.958407 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-02 01:12:33.958418 | orchestrator | Tuesday 02 September 2025 01:09:20 +0000 (0:00:00.135) 0:05:31.884 ***** 2025-09-02 01:12:33.958429 | orchestrator | 2025-09-02 01:12:33.958484 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-02 01:12:33.958499 | orchestrator | Tuesday 02 September 2025 01:09:20 +0000 (0:00:00.130) 0:05:32.014 ***** 2025-09-02 01:12:33.958510 | orchestrator | 2025-09-02 01:12:33.958520 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-02 01:12:33.958530 | orchestrator | Tuesday 02 September 2025 01:09:21 +0000 (0:00:00.132) 0:05:32.146 ***** 2025-09-02 01:12:33.958539 | orchestrator | 2025-09-02 01:12:33.958549 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-02 01:12:33.958559 | orchestrator | Tuesday 02 September 2025 01:09:21 +0000 (0:00:00.138) 0:05:32.285 ***** 2025-09-02 01:12:33.958568 | orchestrator | 2025-09-02 01:12:33.958578 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-02 01:12:33.958587 | orchestrator | Tuesday 02 September 2025 01:09:21 +0000 (0:00:00.130) 0:05:32.416 ***** 2025-09-02 01:12:33.958597 | orchestrator | 2025-09-02 01:12:33.958606 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-09-02 01:12:33.958616 | orchestrator | Tuesday 02 September 2025 01:09:21 +0000 (0:00:00.320) 0:05:32.736 ***** 2025-09-02 01:12:33.958625 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:12:33.958635 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:12:33.958645 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:12:33.958654 | orchestrator | 2025-09-02 01:12:33.958664 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-09-02 01:12:33.958674 | orchestrator | Tuesday 02 September 2025 01:09:33 +0000 (0:00:12.083) 
0:05:44.819 ***** 2025-09-02 01:12:33.958689 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:12:33.958698 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:12:33.958708 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:12:33.958718 | orchestrator | 2025-09-02 01:12:33.958727 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-09-02 01:12:33.958737 | orchestrator | Tuesday 02 September 2025 01:09:47 +0000 (0:00:13.754) 0:05:58.573 ***** 2025-09-02 01:12:33.958746 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:12:33.958756 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:12:33.958765 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:12:33.958775 | orchestrator | 2025-09-02 01:12:33.958784 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-09-02 01:12:33.958794 | orchestrator | Tuesday 02 September 2025 01:10:14 +0000 (0:00:26.875) 0:06:25.449 ***** 2025-09-02 01:12:33.958804 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:12:33.958813 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:12:33.958823 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:12:33.958832 | orchestrator | 2025-09-02 01:12:33.958846 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-09-02 01:12:33.958856 | orchestrator | Tuesday 02 September 2025 01:10:51 +0000 (0:00:37.284) 0:07:02.734 ***** 2025-09-02 01:12:33.958865 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-09-02 01:12:33.958875 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2025-09-02 01:12:33.958885 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
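Editor's note: the FAILED - RETRYING messages above come from the libvirt readiness handler; nova_libvirt needs a short time after a restart before libvirtd answers, so the probe is retried until it succeeds. A minimal sketch of an equivalent check, assuming docker exec access to the nova_libvirt container and an arbitrary retry delay (an illustration, not the role's actual handler):

    - name: Check that libvirtd inside nova_libvirt is ready
      command: docker exec nova_libvirt virsh version --daemon
      register: libvirt_ready
      until: libvirt_ready.rc == 0
      retries: 10
      delay: 5
      changed_when: false

The "virsh version --daemon" probe mirrors the container healthcheck test logged earlier for the nova-libvirt service.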
2025-09-02 01:12:33.958894 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:12:33.958904 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:12:33.958913 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:12:33.958923 | orchestrator | 2025-09-02 01:12:33.958932 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-09-02 01:12:33.958942 | orchestrator | Tuesday 02 September 2025 01:10:57 +0000 (0:00:06.235) 0:07:08.970 ***** 2025-09-02 01:12:33.958952 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:12:33.958961 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:12:33.958971 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:12:33.958980 | orchestrator | 2025-09-02 01:12:33.958996 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-09-02 01:12:33.959005 | orchestrator | Tuesday 02 September 2025 01:10:58 +0000 (0:00:00.791) 0:07:09.762 ***** 2025-09-02 01:12:33.959015 | orchestrator | changed: [testbed-node-4] 2025-09-02 01:12:33.959025 | orchestrator | changed: [testbed-node-3] 2025-09-02 01:12:33.959034 | orchestrator | changed: [testbed-node-5] 2025-09-02 01:12:33.959044 | orchestrator | 2025-09-02 01:12:33.959053 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-09-02 01:12:33.959063 | orchestrator | Tuesday 02 September 2025 01:11:24 +0000 (0:00:25.981) 0:07:35.744 ***** 2025-09-02 01:12:33.959072 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.959082 | orchestrator | 2025-09-02 01:12:33.959091 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-09-02 01:12:33.959101 | orchestrator | Tuesday 02 September 2025 01:11:24 +0000 (0:00:00.130) 0:07:35.875 ***** 2025-09-02 01:12:33.959110 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.959120 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.959129 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.959139 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.959148 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.959158 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
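
The "Waiting for nova-compute services to register themselves" retries above (the eventual ok from testbed-node-3 follows just below) boil down to asking the Nova API whether every expected nova-compute service has shown up. A rough sketch of such a poll with openstacksdk, assuming a configured clouds.yaml entry named `admin`; the actual kolla-ansible task differs in detail:

```python
import time

import openstack  # openstacksdk


def wait_for_compute_services(cloud: str, expected_hosts: set[str],
                              retries: int = 20, delay: float = 10.0) -> bool:
    """Poll the compute service list until every expected host has a
    registered nova-compute service that reports state 'up'."""
    conn = openstack.connect(cloud=cloud)
    for _ in range(retries):
        services = list(conn.compute.services())
        up_hosts = {s.host for s in services
                    if s.binary == "nova-compute" and s.state == "up"}
        missing = expected_hosts - up_hosts
        if not missing:
            return True
        print(f"still waiting for: {sorted(missing)}")
        time.sleep(delay)
    return False


if __name__ == "__main__":
    # Host names taken from the inventory used in this job (compute nodes 3-5).
    wait_for_compute_services(
        cloud="admin",
        expected_hosts={"testbed-node-3", "testbed-node-4", "testbed-node-5"},
    )
```
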
2025-09-02 01:12:33.959168 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-02 01:12:33.959184 | orchestrator | 2025-09-02 01:12:33.959193 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-09-02 01:12:33.959203 | orchestrator | Tuesday 02 September 2025 01:11:46 +0000 (0:00:22.189) 0:07:58.064 ***** 2025-09-02 01:12:33.959212 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.959222 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.959231 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.959241 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.959250 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.959260 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.959269 | orchestrator | 2025-09-02 01:12:33.959279 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-09-02 01:12:33.959288 | orchestrator | Tuesday 02 September 2025 01:11:55 +0000 (0:00:08.520) 0:08:06.584 ***** 2025-09-02 01:12:33.959298 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.959307 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.959317 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.959326 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.959336 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.959345 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-09-02 01:12:33.959355 | orchestrator | 2025-09-02 01:12:33.959364 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-02 01:12:33.959374 | orchestrator | Tuesday 02 September 2025 01:11:59 +0000 (0:00:03.642) 0:08:10.227 ***** 2025-09-02 01:12:33.959383 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-02 01:12:33.959393 | orchestrator | 2025-09-02 01:12:33.959402 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-02 01:12:33.959412 | orchestrator | Tuesday 02 September 2025 01:12:10 +0000 (0:00:11.748) 0:08:21.976 ***** 2025-09-02 01:12:33.959422 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-02 01:12:33.959431 | orchestrator | 2025-09-02 01:12:33.959453 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-09-02 01:12:33.959463 | orchestrator | Tuesday 02 September 2025 01:12:12 +0000 (0:00:01.296) 0:08:23.272 ***** 2025-09-02 01:12:33.959472 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.959482 | orchestrator | 2025-09-02 01:12:33.959491 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-09-02 01:12:33.959501 | orchestrator | Tuesday 02 September 2025 01:12:13 +0000 (0:00:01.270) 0:08:24.543 ***** 2025-09-02 01:12:33.959510 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-02 01:12:33.959520 | orchestrator | 2025-09-02 01:12:33.959529 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-09-02 01:12:33.959539 | orchestrator | Tuesday 02 September 2025 01:12:24 +0000 (0:00:10.689) 0:08:35.233 ***** 2025-09-02 01:12:33.959548 | orchestrator | ok: [testbed-node-3] 2025-09-02 01:12:33.959558 | orchestrator | ok: [testbed-node-4] 2025-09-02 01:12:33.959568 | orchestrator | ok: 
[testbed-node-5] 2025-09-02 01:12:33.959577 | orchestrator | ok: [testbed-node-0] 2025-09-02 01:12:33.959587 | orchestrator | ok: [testbed-node-1] 2025-09-02 01:12:33.959596 | orchestrator | ok: [testbed-node-2] 2025-09-02 01:12:33.959605 | orchestrator | 2025-09-02 01:12:33.959615 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-09-02 01:12:33.959624 | orchestrator | 2025-09-02 01:12:33.959641 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-09-02 01:12:33.959651 | orchestrator | Tuesday 02 September 2025 01:12:25 +0000 (0:00:01.734) 0:08:36.967 ***** 2025-09-02 01:12:33.959661 | orchestrator | changed: [testbed-node-0] 2025-09-02 01:12:33.959670 | orchestrator | changed: [testbed-node-1] 2025-09-02 01:12:33.959680 | orchestrator | changed: [testbed-node-2] 2025-09-02 01:12:33.959689 | orchestrator | 2025-09-02 01:12:33.959699 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-09-02 01:12:33.959708 | orchestrator | 2025-09-02 01:12:33.959723 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-09-02 01:12:33.959733 | orchestrator | Tuesday 02 September 2025 01:12:26 +0000 (0:00:01.145) 0:08:38.113 ***** 2025-09-02 01:12:33.959742 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.959752 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.959761 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.959771 | orchestrator | 2025-09-02 01:12:33.959780 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-09-02 01:12:33.959790 | orchestrator | 2025-09-02 01:12:33.959800 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-09-02 01:12:33.959809 | orchestrator | Tuesday 02 September 2025 01:12:27 +0000 (0:00:00.508) 0:08:38.622 ***** 2025-09-02 01:12:33.959824 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-09-02 01:12:33.959834 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-02 01:12:33.959844 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-02 01:12:33.959853 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-09-02 01:12:33.959863 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-09-02 01:12:33.959872 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-09-02 01:12:33.959882 | orchestrator | skipping: [testbed-node-3] 2025-09-02 01:12:33.959891 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-09-02 01:12:33.959901 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-02 01:12:33.959910 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-02 01:12:33.959920 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-09-02 01:12:33.959929 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-09-02 01:12:33.959939 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-09-02 01:12:33.959948 | orchestrator | skipping: [testbed-node-4] 2025-09-02 01:12:33.959958 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-09-02 01:12:33.959968 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-02 01:12:33.959977 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-02 01:12:33.959987 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-09-02 01:12:33.959996 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-09-02 01:12:33.960006 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-09-02 01:12:33.960015 | orchestrator | skipping: [testbed-node-5] 2025-09-02 01:12:33.960025 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-09-02 01:12:33.960034 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-02 01:12:33.960044 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-02 01:12:33.960053 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-09-02 01:12:33.960063 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-09-02 01:12:33.960072 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-09-02 01:12:33.960082 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.960091 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-09-02 01:12:33.960101 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-02 01:12:33.960110 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-02 01:12:33.960120 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-09-02 01:12:33.960129 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-09-02 01:12:33.960139 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-09-02 01:12:33.960149 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.960158 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-09-02 01:12:33.960168 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-02 01:12:33.960183 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-02 01:12:33.960193 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-09-02 01:12:33.960202 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-09-02 01:12:33.960212 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-09-02 01:12:33.960221 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.960231 | orchestrator | 2025-09-02 01:12:33.960240 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-09-02 01:12:33.960250 | orchestrator | 2025-09-02 01:12:33.960259 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-09-02 01:12:33.960269 | orchestrator | Tuesday 02 September 2025 01:12:28 +0000 (0:00:01.411) 0:08:40.034 ***** 2025-09-02 01:12:33.960278 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-09-02 01:12:33.960288 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-09-02 01:12:33.960298 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.960307 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-09-02 01:12:33.960317 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-09-02 01:12:33.960326 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.960336 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-09-02 01:12:33.960345 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)  2025-09-02 01:12:33.960358 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.960368 | orchestrator | 2025-09-02 01:12:33.960378 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-09-02 01:12:33.960387 | orchestrator | 2025-09-02 01:12:33.960397 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-09-02 01:12:33.960406 | orchestrator | Tuesday 02 September 2025 01:12:29 +0000 (0:00:00.737) 0:08:40.771 ***** 2025-09-02 01:12:33.960416 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.960425 | orchestrator | 2025-09-02 01:12:33.960435 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-09-02 01:12:33.960456 | orchestrator | 2025-09-02 01:12:33.960466 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-09-02 01:12:33.960476 | orchestrator | Tuesday 02 September 2025 01:12:30 +0000 (0:00:00.692) 0:08:41.464 ***** 2025-09-02 01:12:33.960485 | orchestrator | skipping: [testbed-node-0] 2025-09-02 01:12:33.960495 | orchestrator | skipping: [testbed-node-1] 2025-09-02 01:12:33.960504 | orchestrator | skipping: [testbed-node-2] 2025-09-02 01:12:33.960514 | orchestrator | 2025-09-02 01:12:33.960523 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-02 01:12:33.960538 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-02 01:12:33.960549 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-09-02 01:12:33.960559 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-02 01:12:33.960569 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-02 01:12:33.960579 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-02 01:12:33.960588 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-02 01:12:33.960598 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-02 01:12:33.960613 | orchestrator | 2025-09-02 01:12:33.960623 | orchestrator | 2025-09-02 01:12:33.960633 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-02 01:12:33.960643 | orchestrator | Tuesday 02 September 2025 01:12:30 +0000 (0:00:00.441) 0:08:41.906 ***** 2025-09-02 01:12:33.960652 | orchestrator | =============================================================================== 2025-09-02 01:12:33.960662 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 37.28s 2025-09-02 01:12:33.960671 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.07s 2025-09-02 01:12:33.960681 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 26.88s 2025-09-02 01:12:33.960690 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.98s 2025-09-02 01:12:33.960700 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 24.25s 2025-09-02 01:12:33.960709 | orchestrator | nova-cell : 
Waiting for nova-compute services to register themselves --- 22.19s 2025-09-02 01:12:33.960719 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.61s 2025-09-02 01:12:33.960728 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.83s 2025-09-02 01:12:33.960738 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.54s 2025-09-02 01:12:33.960747 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 13.75s 2025-09-02 01:12:33.960757 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.44s 2025-09-02 01:12:33.960766 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.08s 2025-09-02 01:12:33.960776 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.05s 2025-09-02 01:12:33.960785 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.85s 2025-09-02 01:12:33.960794 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.75s 2025-09-02 01:12:33.960804 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.79s 2025-09-02 01:12:33.960813 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.69s 2025-09-02 01:12:33.960823 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.98s 2025-09-02 01:12:33.960832 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.52s 2025-09-02 01:12:33.960842 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.72s 2025-09-02 01:12:36.986174 | orchestrator | 2025-09-02 01:12:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:12:40.036755 | orchestrator | 2025-09-02 01:12:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:12:43.074997 | orchestrator | 2025-09-02 01:12:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:12:46.107291 | orchestrator | 2025-09-02 01:12:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:12:49.153393 | orchestrator | 2025-09-02 01:12:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:12:52.201451 | orchestrator | 2025-09-02 01:12:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:12:55.248592 | orchestrator | 2025-09-02 01:12:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:12:58.291095 | orchestrator | 2025-09-02 01:12:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:13:01.335761 | orchestrator | 2025-09-02 01:13:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:13:04.378428 | orchestrator | 2025-09-02 01:13:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:13:07.421106 | orchestrator | 2025-09-02 01:13:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:13:10.456866 | orchestrator | 2025-09-02 01:13:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:13:13.501407 | orchestrator | 2025-09-02 01:13:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:13:16.530410 | orchestrator | 2025-09-02 01:13:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:13:19.574195 | orchestrator 
| 2025-09-02 01:13:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:13:22.621380 | orchestrator | 2025-09-02 01:13:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:13:25.664480 | orchestrator | 2025-09-02 01:13:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:13:28.707147 | orchestrator | 2025-09-02 01:13:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:13:31.751226 | orchestrator | 2025-09-02 01:13:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-02 01:13:34.798474 | orchestrator | 2025-09-02 01:13:35.120841 | orchestrator | 2025-09-02 01:13:35.125320 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Tue Sep 2 01:13:35 UTC 2025 2025-09-02 01:13:35.125347 | orchestrator | 2025-09-02 01:13:35.418423 | orchestrator | ok: Runtime: 0:35:32.068683 2025-09-02 01:13:35.670454 | 2025-09-02 01:13:35.670622 | TASK [Bootstrap services] 2025-09-02 01:13:36.402187 | orchestrator | 2025-09-02 01:13:36.402422 | orchestrator | # BOOTSTRAP 2025-09-02 01:13:36.402446 | orchestrator | 2025-09-02 01:13:36.402461 | orchestrator | + set -e 2025-09-02 01:13:36.402474 | orchestrator | + echo 2025-09-02 01:13:36.402490 | orchestrator | + echo '# BOOTSTRAP' 2025-09-02 01:13:36.402508 | orchestrator | + echo 2025-09-02 01:13:36.402558 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-02 01:13:36.408923 | orchestrator | + set -e 2025-09-02 01:13:36.408953 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-02 01:13:40.940303 | orchestrator | 2025-09-02 01:13:40 | INFO  | It takes a moment until task 7a8f4ba0-a66b-46b7-ab84-0cc730f31996 (flavor-manager) has been started and output is visible here. 
2025-09-02 01:13:44.542894 | orchestrator | Traceback (most recent call last):
  /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:179 in run
      176   logger.add(sys.stderr, format=log_fmt, level=level, colorize=True)
      177
      178   definitions = get_flavor_definitions(name, url)
    ❱ 179   manager = FlavorManager(
      180       cloud=Cloud(cloud), definitions=definitions, recommended=recom…
      181   )
      182   manager.run()
    locals: cloud='admin', debug=False, name='local', url=None, recommended=True, level='INFO',
            log_fmt='{time:YYYY-MM-DD HH:mm:ss} | {level: <8} | …' (+17 chars truncated),
            definitions={'reference': [{'field': 'name', 'mandatory_prefix': 'SCS-'}, {'field': 'cpus'}, {'field': 'ram'},
                                       {'field': 'disk'}, {'field': 'public', 'default': True}, {'field': 'disabled', 'default': False}],
                         'mandatory': [{'name': 'SCS-1L-1', 'cpus': 1, 'ram': 1024, 'disk': 0, 'scs:cpu-type': 'crowded-core',
                                        'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1L:1', 'scs:name-v2': 'SCS-1L-1',
                                        'hw_rng:allowed': 'true'},
                                       … SCS-1L-1-5, SCS-1V-2, SCS-1V-2-5, SCS-1V-4, SCS-1V-4-10, SCS-1V-8, SCS-1V-8-20,
                                       SCS-2V-4, SCS-2V-4-10 in the same shape, … +19 more]}
                        (no 'recommended' key present)
  /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:97 in __init__
       94   self.required_flavors = definitions["mandatory"]
       95   self.cloud = cloud
       96   if recommended:
    ❱  97       self.required_flavors = self.required_flavors + definition…
       98
       99   self.defaults_dict = {}
      100   for item in definitions["reference"]:
    locals: cloud=…, recommended=True, self=…, definitions=(same dict as above, again without a 'recommended' key)
2025-09-02 01:13:44.645703 | orchestrator | KeyError: 'recommended'
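
The traceback ends in `KeyError: 'recommended'`: `FlavorManager.__init__` indexes `definitions["recommended"]` whenever `recommended=True`, but the definitions fetched for the `local` flavor set only carry `reference` and `mandatory` keys. A minimal sketch of the failing access pattern and a defensive variant; the classes below are stand-ins built from the traceback, not the real openstack_flavor_manager code:

```python
# Minimal reproduction of the failure mode shown in the traceback above.
# Only the dictionary access pattern is taken from the log; everything else
# is illustrative.

class FlavorManagerSketch:
    def __init__(self, definitions: dict, recommended: bool = True) -> None:
        self.required_flavors = definitions["mandatory"]
        if recommended:
            # Raises KeyError when the definition source ships no "recommended"
            # list, which is exactly what this job hit.
            self.required_flavors = self.required_flavors + definitions["recommended"]


class TolerantFlavorManagerSketch:
    def __init__(self, definitions: dict, recommended: bool = True) -> None:
        self.required_flavors = list(definitions["mandatory"])
        if recommended:
            # Defensive variant: treat a missing "recommended" section as empty.
            self.required_flavors += definitions.get("recommended", [])


if __name__ == "__main__":
    # Shape mirrors the locals dump: only "reference" and "mandatory" are present.
    definitions = {
        "reference": [{"field": "name", "mandatory_prefix": "SCS-"}],
        "mandatory": [{"name": "SCS-1L-1", "cpus": 1, "ram": 1024, "disk": 0}],
    }

    try:
        FlavorManagerSketch(definitions, recommended=True)
    except KeyError as exc:
        print(f"reproduced: KeyError: {exc}")   # -> reproduced: KeyError: 'recommended'

    ok = TolerantFlavorManagerSketch(definitions, recommended=True)
    print(ok.required_flavors)                  # -> only the mandatory flavors
```
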
2025-09-02 01:13:45.214949 | orchestrator | ERROR 2025-09-02 01:13:45.215325 | orchestrator | { 2025-09-02 01:13:45.215409 | orchestrator | "delta": "0:00:08.912052", 2025-09-02 01:13:45.215464 | orchestrator | "end": "2025-09-02 01:13:44.936570", 2025-09-02 01:13:45.215511 | orchestrator | "msg": "non-zero return code", 2025-09-02 01:13:45.215554 | orchestrator | "rc": 1, 2025-09-02 01:13:45.215595 | orchestrator | "start": "2025-09-02 01:13:36.024518" 2025-09-02 01:13:45.215658 | orchestrator | } failure 2025-09-02 01:13:45.233836 | 2025-09-02 01:13:45.234016 | PLAY RECAP 2025-09-02 01:13:45.234144 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0 2025-09-02 01:13:45.234210 | 2025-09-02 01:13:45.453216 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-09-02 01:13:45.454280 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-09-02 01:13:46.181228 | 2025-09-02 01:13:46.181385 | PLAY [Post output play] 2025-09-02 01:13:46.197001 | 2025-09-02 01:13:46.197127 | LOOP [stage-output : Register sources] 2025-09-02 01:13:46.268408 | 2025-09-02 01:13:46.268729 | TASK [stage-output : Check sudo] 2025-09-02 01:13:47.119152 | orchestrator | sudo: a password is required 2025-09-02 01:13:47.306973 | orchestrator | ok: Runtime: 0:00:00.014276 2025-09-02 01:13:47.321828 | 2025-09-02 01:13:47.321992 | LOOP [stage-output : Set source and destination for files and folders] 2025-09-02 01:13:47.364301 | 2025-09-02 01:13:47.364611 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-09-02 01:13:47.434174 | orchestrator | ok 2025-09-02 01:13:47.443504 | 2025-09-02 01:13:47.443683 | LOOP [stage-output : Ensure target folders exist] 2025-09-02 01:13:47.882614 | orchestrator | ok: "docs" 2025-09-02 01:13:47.882896 | 2025-09-02 01:13:48.109666 | orchestrator | ok: "artifacts" 2025-09-02 01:13:48.317260 | orchestrator | ok: "logs" 2025-09-02 01:13:48.336995 | 2025-09-02 01:13:48.337173 | LOOP [stage-output : Copy files and folders to staging folder] 2025-09-02 01:13:48.373840 | 2025-09-02 01:13:48.374092 | TASK [stage-output : Make all log files readable] 2025-09-02 01:13:48.638358 | orchestrator | ok 2025-09-02 01:13:48.650248 | 2025-09-02 01:13:48.650402 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-09-02 01:13:48.685145 | orchestrator | skipping: Conditional result was False 2025-09-02 01:13:48.698227 | 2025-09-02 01:13:48.698371 | TASK [stage-output : Discover log files for compression] 2025-09-02 01:13:48.722446 | orchestrator | skipping: Conditional
result was False 2025-09-02 01:13:48.738544 | 2025-09-02 01:13:48.738751 | LOOP [stage-output : Archive everything from logs] 2025-09-02 01:13:48.784737 | 2025-09-02 01:13:48.784910 | PLAY [Post cleanup play] 2025-09-02 01:13:48.794266 | 2025-09-02 01:13:48.794375 | TASK [Set cloud fact (Zuul deployment)] 2025-09-02 01:13:48.850003 | orchestrator | ok 2025-09-02 01:13:48.860403 | 2025-09-02 01:13:48.860510 | TASK [Set cloud fact (local deployment)] 2025-09-02 01:13:48.883832 | orchestrator | skipping: Conditional result was False 2025-09-02 01:13:48.899682 | 2025-09-02 01:13:48.899839 | TASK [Clean the cloud environment] 2025-09-02 01:13:49.437063 | orchestrator | 2025-09-02 01:13:49 - clean up servers 2025-09-02 01:13:50.169941 | orchestrator | 2025-09-02 01:13:50 - testbed-manager 2025-09-02 01:13:50.253395 | orchestrator | 2025-09-02 01:13:50 - testbed-node-2 2025-09-02 01:13:50.346165 | orchestrator | 2025-09-02 01:13:50 - testbed-node-3 2025-09-02 01:13:50.436466 | orchestrator | 2025-09-02 01:13:50 - testbed-node-4 2025-09-02 01:13:50.528952 | orchestrator | 2025-09-02 01:13:50 - testbed-node-1 2025-09-02 01:13:50.626168 | orchestrator | 2025-09-02 01:13:50 - testbed-node-5 2025-09-02 01:13:50.715006 | orchestrator | 2025-09-02 01:13:50 - testbed-node-0 2025-09-02 01:13:50.811463 | orchestrator | 2025-09-02 01:13:50 - clean up keypairs 2025-09-02 01:13:50.830453 | orchestrator | 2025-09-02 01:13:50 - testbed 2025-09-02 01:13:50.857248 | orchestrator | 2025-09-02 01:13:50 - wait for servers to be gone 2025-09-02 01:14:01.700817 | orchestrator | 2025-09-02 01:14:01 - clean up ports 2025-09-02 01:14:01.908527 | orchestrator | 2025-09-02 01:14:01 - 4b831de3-e189-4f08-b479-8a4e7aee17ba 2025-09-02 01:14:02.164761 | orchestrator | 2025-09-02 01:14:02 - 5c86c7dd-e297-46d8-96d1-012aa46c8532 2025-09-02 01:14:02.456635 | orchestrator | 2025-09-02 01:14:02 - 9dace14b-6ef8-4dc4-a39f-d4e37a96e4a8 2025-09-02 01:14:02.699962 | orchestrator | 2025-09-02 01:14:02 - c323becb-e653-4fe3-8e8e-5268518ad6f8 2025-09-02 01:14:02.924282 | orchestrator | 2025-09-02 01:14:02 - ccecfe80-df93-4849-9d04-f8a095c13abf 2025-09-02 01:14:03.140596 | orchestrator | 2025-09-02 01:14:03 - d6321341-0d97-4993-a898-37069b63a847 2025-09-02 01:14:03.347093 | orchestrator | 2025-09-02 01:14:03 - f8473465-2807-43ad-a6b3-57ec0486f736 2025-09-02 01:14:03.768833 | orchestrator | 2025-09-02 01:14:03 - clean up volumes 2025-09-02 01:14:03.881126 | orchestrator | 2025-09-02 01:14:03 - testbed-volume-3-node-base 2025-09-02 01:14:03.918802 | orchestrator | 2025-09-02 01:14:03 - testbed-volume-1-node-base 2025-09-02 01:14:03.968043 | orchestrator | 2025-09-02 01:14:03 - testbed-volume-5-node-base 2025-09-02 01:14:04.010266 | orchestrator | 2025-09-02 01:14:04 - testbed-volume-4-node-base 2025-09-02 01:14:04.056889 | orchestrator | 2025-09-02 01:14:04 - testbed-volume-0-node-base 2025-09-02 01:14:04.103188 | orchestrator | 2025-09-02 01:14:04 - testbed-volume-2-node-base 2025-09-02 01:14:04.146378 | orchestrator | 2025-09-02 01:14:04 - testbed-volume-manager-base 2025-09-02 01:14:04.190287 | orchestrator | 2025-09-02 01:14:04 - testbed-volume-7-node-4 2025-09-02 01:14:04.236089 | orchestrator | 2025-09-02 01:14:04 - testbed-volume-2-node-5 2025-09-02 01:14:04.280117 | orchestrator | 2025-09-02 01:14:04 - testbed-volume-3-node-3 2025-09-02 01:14:04.321580 | orchestrator | 2025-09-02 01:14:04 - testbed-volume-8-node-5 2025-09-02 01:14:04.360923 | orchestrator | 2025-09-02 01:14:04 - testbed-volume-1-node-4 2025-09-02 01:14:04.401384 | 
orchestrator | 2025-09-02 01:14:04 - testbed-volume-6-node-3 2025-09-02 01:14:04.444287 | orchestrator | 2025-09-02 01:14:04 - testbed-volume-5-node-5 2025-09-02 01:14:04.483193 | orchestrator | 2025-09-02 01:14:04 - testbed-volume-0-node-3 2025-09-02 01:14:04.532158 | orchestrator | 2025-09-02 01:14:04 - testbed-volume-4-node-4 2025-09-02 01:14:04.574340 | orchestrator | 2025-09-02 01:14:04 - disconnect routers 2025-09-02 01:14:04.711678 | orchestrator | 2025-09-02 01:14:04 - testbed 2025-09-02 01:14:05.760398 | orchestrator | 2025-09-02 01:14:05 - clean up subnets 2025-09-02 01:14:05.797080 | orchestrator | 2025-09-02 01:14:05 - subnet-testbed-management 2025-09-02 01:14:05.953435 | orchestrator | 2025-09-02 01:14:05 - clean up networks 2025-09-02 01:14:06.124850 | orchestrator | 2025-09-02 01:14:06 - net-testbed-management 2025-09-02 01:14:06.402092 | orchestrator | 2025-09-02 01:14:06 - clean up security groups 2025-09-02 01:14:06.441891 | orchestrator | 2025-09-02 01:14:06 - testbed-node 2025-09-02 01:14:06.548133 | orchestrator | 2025-09-02 01:14:06 - testbed-management 2025-09-02 01:14:06.653174 | orchestrator | 2025-09-02 01:14:06 - clean up floating ips 2025-09-02 01:14:06.687745 | orchestrator | 2025-09-02 01:14:06 - 81.163.193.185 2025-09-02 01:14:07.029991 | orchestrator | 2025-09-02 01:14:07 - clean up routers 2025-09-02 01:14:07.126295 | orchestrator | 2025-09-02 01:14:07 - testbed 2025-09-02 01:14:07.986002 | orchestrator | ok: Runtime: 0:00:18.742226 2025-09-02 01:14:07.989485 | 2025-09-02 01:14:07.989609 | PLAY RECAP 2025-09-02 01:14:07.989711 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-09-02 01:14:07.989748 | 2025-09-02 01:14:08.115379 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-09-02 01:14:08.117836 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-09-02 01:14:08.826251 | 2025-09-02 01:14:08.826400 | PLAY [Cleanup play] 2025-09-02 01:14:08.841895 | 2025-09-02 01:14:08.842018 | TASK [Set cloud fact (Zuul deployment)] 2025-09-02 01:14:08.894896 | orchestrator | ok 2025-09-02 01:14:08.902957 | 2025-09-02 01:14:08.903085 | TASK [Set cloud fact (local deployment)] 2025-09-02 01:14:08.936921 | orchestrator | skipping: Conditional result was False 2025-09-02 01:14:08.954827 | 2025-09-02 01:14:08.955021 | TASK [Clean the cloud environment] 2025-09-02 01:14:10.107610 | orchestrator | 2025-09-02 01:14:10 - clean up servers 2025-09-02 01:14:10.576435 | orchestrator | 2025-09-02 01:14:10 - clean up keypairs 2025-09-02 01:14:10.593650 | orchestrator | 2025-09-02 01:14:10 - wait for servers to be gone 2025-09-02 01:14:10.641714 | orchestrator | 2025-09-02 01:14:10 - clean up ports 2025-09-02 01:14:10.713628 | orchestrator | 2025-09-02 01:14:10 - clean up volumes 2025-09-02 01:14:10.776432 | orchestrator | 2025-09-02 01:14:10 - disconnect routers 2025-09-02 01:14:10.807245 | orchestrator | 2025-09-02 01:14:10 - clean up subnets 2025-09-02 01:14:10.825023 | orchestrator | 2025-09-02 01:14:10 - clean up networks 2025-09-02 01:14:10.946903 | orchestrator | 2025-09-02 01:14:10 - clean up security groups 2025-09-02 01:14:10.976734 | orchestrator | 2025-09-02 01:14:10 - clean up floating ips 2025-09-02 01:14:11.484077 | orchestrator | 2025-09-02 01:14:11 - clean up routers 2025-09-02 01:14:11.995347 | orchestrator | ok: Runtime: 0:00:01.770510 2025-09-02 01:14:11.999255 | 2025-09-02 01:14:11.999448 | PLAY RECAP 2025-09-02 01:14:11.999579 | 
orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-09-02 01:14:11.999685 | 2025-09-02 01:14:12.122941 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-09-02 01:14:12.125348 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-09-02 01:14:12.862247 | 2025-09-02 01:14:12.862402 | PLAY [Base post-fetch] 2025-09-02 01:14:12.877709 | 2025-09-02 01:14:12.877837 | TASK [fetch-output : Set log path for multiple nodes] 2025-09-02 01:14:12.926057 | orchestrator | skipping: Conditional result was False 2025-09-02 01:14:12.939679 | 2025-09-02 01:14:12.939863 | TASK [fetch-output : Set log path for single node] 2025-09-02 01:14:12.975452 | orchestrator | ok 2025-09-02 01:14:12.983718 | 2025-09-02 01:14:12.984505 | LOOP [fetch-output : Ensure local output dirs] 2025-09-02 01:14:13.460189 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/0258edba1581438491dbc4abeb4bfa2c/work/logs" 2025-09-02 01:14:13.735036 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0258edba1581438491dbc4abeb4bfa2c/work/artifacts" 2025-09-02 01:14:14.003557 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0258edba1581438491dbc4abeb4bfa2c/work/docs" 2025-09-02 01:14:14.027484 | 2025-09-02 01:14:14.027706 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-09-02 01:14:14.975880 | orchestrator | changed: .d..t...... ./ 2025-09-02 01:14:14.976187 | orchestrator | changed: All items complete 2025-09-02 01:14:14.976246 | 2025-09-02 01:14:15.680299 | orchestrator | changed: .d..t...... ./ 2025-09-02 01:14:16.438553 | orchestrator | changed: .d..t...... ./ 2025-09-02 01:14:16.467357 | 2025-09-02 01:14:16.467502 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-09-02 01:14:16.503614 | orchestrator | skipping: Conditional result was False 2025-09-02 01:14:16.506191 | orchestrator | skipping: Conditional result was False 2025-09-02 01:14:16.532104 | 2025-09-02 01:14:16.532226 | PLAY RECAP 2025-09-02 01:14:16.532307 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-09-02 01:14:16.532351 | 2025-09-02 01:14:16.659483 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-09-02 01:14:16.661948 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-09-02 01:14:17.385810 | 2025-09-02 01:14:17.385975 | PLAY [Base post] 2025-09-02 01:14:17.400370 | 2025-09-02 01:14:17.400512 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-09-02 01:14:18.372008 | orchestrator | changed 2025-09-02 01:14:18.383016 | 2025-09-02 01:14:18.383165 | PLAY RECAP 2025-09-02 01:14:18.383248 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-09-02 01:14:18.383329 | 2025-09-02 01:14:18.511250 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-09-02 01:14:18.513952 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-09-02 01:14:19.298380 | 2025-09-02 01:14:19.298549 | PLAY [Base post-logs] 2025-09-02 01:14:19.309049 | 2025-09-02 01:14:19.309180 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-09-02 01:14:19.763913 | localhost | changed 2025-09-02 01:14:19.777411 | 2025-09-02 01:14:19.777561 | TASK [generate-zuul-manifest : Return Zuul manifest URL to 
Zuul] 2025-09-02 01:14:19.808064 | localhost | ok 2025-09-02 01:14:19.815284 | 2025-09-02 01:14:19.815453 | TASK [Set zuul-log-path fact] 2025-09-02 01:14:19.833233 | localhost | ok 2025-09-02 01:14:19.846359 | 2025-09-02 01:14:19.846493 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-09-02 01:14:19.873130 | localhost | ok 2025-09-02 01:14:19.877113 | 2025-09-02 01:14:19.877209 | TASK [upload-logs : Create log directories] 2025-09-02 01:14:20.361848 | localhost | changed 2025-09-02 01:14:20.364872 | 2025-09-02 01:14:20.364985 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-09-02 01:14:20.867019 | localhost -> localhost | ok: Runtime: 0:00:00.007125 2025-09-02 01:14:20.876504 | 2025-09-02 01:14:20.876722 | TASK [upload-logs : Upload logs to log server] 2025-09-02 01:14:21.535393 | localhost | Output suppressed because no_log was given 2025-09-02 01:14:21.538378 | 2025-09-02 01:14:21.538505 | LOOP [upload-logs : Compress console log and json output] 2025-09-02 01:14:21.596014 | localhost | skipping: Conditional result was False 2025-09-02 01:14:21.604843 | localhost | skipping: Conditional result was False 2025-09-02 01:14:21.613757 | 2025-09-02 01:14:21.613992 | LOOP [upload-logs : Upload compressed console log and json output] 2025-09-02 01:14:21.662954 | localhost | skipping: Conditional result was False 2025-09-02 01:14:21.663208 | 2025-09-02 01:14:21.669192 | localhost | skipping: Conditional result was False 2025-09-02 01:14:21.674133 | 2025-09-02 01:14:21.674265 | LOOP [upload-logs : Upload console log and json output]